How Brutish are Brute Facts?



How Brutish are Brute Facts?

Post #1

Post by harvey1 »

QED and I started discussing this subject on the nature of brute facts on another thread in the Christianity sub-forum. However, I wanted to make this discussion a separate thread so we can fully explore this subject matter:
QED wrote:
harvey1 wrote:The cost of atheism is that we have to believe something is a brute fact that happens to be the most complex object we have ever observed. In fact, so complex is it, that we can't come anywhere close to emulating such a design using supercomputers and the like, yet you expect us to believe such a departure from parsimony as being parsimonious!
I have pointed out to you often that this argument carries no weight because the nature of discovery and human understanding is fickle. It is an indisputable fact that we frequently miss that which is right in front of our noses because we are often using the wrong tools or mindset. This is, after all, what we keep accusing each other of in many of our discussions. So I am adamant that we cannot use our lack of savvy to assess the possibility or complexity of an unknown entity.
Let me restate your argument more formally so it is easier for me to point out its fallacious nature:
  1. Brute facts are needed in every ultimate explanation of the world
  2. The brute fact for atheism is that there is a (meta)universe
  3. There are no prescriptive laws that determine or restrict brute facts
  4. We have no way to evaluate the complexity, likelihood, or probability of this (meta)universe brute fact to bring about universes such as our own
  5. There's no reason based on (4) to believe that it ought to be obvious or simple to simulate a world which naturally produces complexity that in principle can bring about universes such as our own
  6. The observable universe can naturally be explained in terms of a brute fact (meta)universe that is allowed to evolve over time such that at some point in this process our universe naturally appears
  7. The brute fact (meta)universe, according to (6), is a natural explanation
  8. Occam's razor requires that we believe the most parsimonious explanation--which translates into a natural explanation
  C. The universe is a consequence of a brute fact (meta)universe, needing no God to explain its existence: God is unlikely to exist

Now, I'm sure you would like to make changes to the above argument; however, I think no matter how you change it, it is a faulty argument. For example, (3) appears to contradict (4). If there are no prescriptive law limitations that determine the brutish nature of your brute fact (meta)universe, then absolutely anything is possible, even brute fact scenarios that do not lead to universes with sophisticated structures. However, if anything is possible, then we do have a means to gauge likelihood. We have many conceptions of behaviors that the (meta)universe could have exhibited as a brute fact behavior. There are literally thousands or millions of behaviors that we can imagine that would never produce a universe such as our own. Hence, the likelihood of a (meta)universe having a behavior that evolves sophisticated structures such as our own looks vanishingly small compared to the large number of brute fact (meta)universes that would not do anything of the sort. Hence, (4) is false. If (4) is false, then (5) is false. If (5) is false, then this is not a parsimonious solution (7), and hence it violates Occam's razor (8), and therefore not only is your conclusion false, but any explanation that doesn't violate Occam's razor should be more likely to be considered true (e.g., a belief in an Omniscient Interpreter, God).
QED wrote:
harvey1 wrote:If the [meta]universe was to evolve, it had to allow complex structures to evolve. This behavior cannot be programmed, not anything close. I realize you think that there might be a set of behaviors out there that a 1-billion-line cellular automaton algorithm could accurately simulate which does the trick, but that still doesn't answer why the metauniverse didn't have a behavior that a 10-line cellular automaton algorithm would describe (e.g., a "beacon" metauniverse). Why do we not live in a beacon universe? We know your answer: "because we don't." But, that is not a good answer.
Do you deny that the majority of the world's cosmologists working today are willing to accept scenarios where this is not the only universe that ever existed? If it rarely came up for serious consideration, that might make it "not a good answer," but I think you'll find it is a better answer than that.
QED, you're mixing up this issue. Cosmologists proceed on the basis of prescriptive laws: other universes are likely given those prescriptive laws (e.g., quantum cosmological laws, or inflationary laws due to quantum laws, etc.). What you are saying about a brute fact (meta)universe has absolutely nothing in common with these scientific theories. You aren't basing your views on any law. You are basing it on a brute fact that has no prescriptive law that determines its truth or falsity. In fact, it is very difficult for me to assess how it is that a principle of parsimony is even a concern for you, since a principle of parsimony would be a prescriptive law, and you say there are no prescriptive laws. So, why do you limit brute facts to a principle of parsimony as a prescriptive law? Of course, if you don't do that, then your view becomes an irrational view, and as we agree, if there is a rational explanation and an irrational explanation, we are obligated to give precedence to the rational explanation.
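
For concreteness, here is a minimal sketch (in Python, purely for illustration) of the kind of "10-line cellular automaton" mentioned in the quote above. The rule number, grid width, and step count are arbitrary choices of mine, not anything from our earlier discussion; the only point of the toy is the contrast between a short rule that generates intricate structure (Rule 110) and an equally short rule that just blinks like a beacon (Rule 51).

Code:

RULE = 110              # a famously short rule that produces intricate structure
WIDTH, STEPS = 64, 32   # try RULE = 51 for a trivial, "beacon"-like blinker

def step(cells, rule):
    """Apply an elementary cellular-automaton rule once, with wrap-around edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # a value from 0 to 7
        out.append((rule >> neighborhood) & 1)               # look up that bit of the rule
    return out

cells = [0] * WIDTH
cells[WIDTH // 2] = 1    # a single "on" cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, RULE)

Both rules are equally cheap to state, which is exactly why I keep asking why the (meta)universe's brute behavior should have been of the structure-producing kind rather than the beacon kind.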


Post #31

Post by Bugmaster »

As I was saying before I hit the wrong button:

Our models of physical laws do refer to some properties of the real world. Some of our models are probably wrong, but that's ok, because we have the real world out there to test them against. By repeated testing, our understanding of the real physical laws improves daily.

Logical implications and math are tools we use to construct these models. These tools are not subject to testing, because we just made them up in order to help us build the models. They are true by definition, and don't refer to anything but other made-up things.

This is why I claim that implications and such do not physically exist -- at least, not in the same way that laws of nature exist.


Post #32

Post by harvey1 »

Let me try to understand your position from basic postulates. Here's how I summarize your argument:
  1. Humans experience the natural world, and as a result of this experience, they invent logic and mathematics based on rules of logical implication, which are also invented
  2. The rules of logical implication are true by definition (e.g., "x equals x"); they are not scientific models, they are mainly conceptual tools
  3. Logical implication is a building block of scientific models; however, the abstract models themselves deal with a different subject and are not subject to testing and observation of the real world
  4. Testing and observation allow scientific models to be considered very accurate depictions of a scientifically accessible world, and therefore refer to some properties of the real world
  5. Although a form of scientific realism is justified given (4), we must remain fallibilistic and skeptical with regard to the nature of our theories exactly matching the nature of the real world.
Does this accurately sum up your views on scientific realism along with your nominalism?

If so, I have some questions:

(a) What justifies your scientific realism given the Quine-Duhem thesis? I know you responded to the QD thesis as follows:
If my theory predicts that this rock will float upward, and the rock falls downward, then my theory is false. You can always modify the theory, saying something like, "oh, only magnetic rocks float upward, not regular rocks", but the original theory is still false.
However, I think this claim is irrelevant if the original theory is "false," since the issue is that an original "false" theory can be made "true" by correcting an underlying assumption. If truth and falsity are established based purely on matching a theory's observables with reality, then how do you justify your scientific realism if it hinges only on matching observables with reality?

(b) If our conceptual tools (logic and mathematics) are completely false, and also given the QD thesis, then on what basis do you say that our theories can be "accurate depictions" of reality? For example, the Ptolemaic model was based on false assumptions (e.g., geocentrism), so how could such a model be true if its assumptions are false? Similarly, if scientific models are all based on false assumptions (i.e., the conceptual tools of logic and math), then how could our scientific models be true or approximately true? Even the Ptolemaic model was considered for the most part approximately true during the later years before its final rejection, and it was able to maintain this status by "corrections" to the main model. We now know, of course, that in principle such a theory could never be true since its assumptions were false. Similarly, how can you say that in principle our theories can be approximately true when you admit that the "building blocks" can be essentially false?

(c) Going back to the Eightfold Way, can you explain how a simple model based almost straightforwardly on group theory (i.e., logical implication) can predict a simple arrangement of baryons and mesons into octets (and spin-3/2 baryons forming a decuplet), even to the point of predicting the Ω− particle when no observations had indicated that a particle was missing? How did this simple model also predict the particle's strangeness of −3, electric charge of −1, and a mass near 1680 MeV/c²? Wouldn't it have to be pure luck if group theory is a useful fiction?
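
To make the arithmetic behind that prediction concrete, here is a rough back-of-the-envelope sketch (Python; the masses are the usual approximate textbook values, and this is only the decuplet "equal spacing" pattern, not the group-theoretic derivation itself):

Code:

# Approximate textbook masses, in MeV/c^2, of the spin-3/2 decuplet members
# that were known before the Omega-minus was observed.
known = {"Delta(1232)": 1232, "Sigma*(1385)": 1385, "Xi*(1530)": 1530}

masses = list(known.values())
spacings = [b - a for a, b in zip(masses, masses[1:])]   # roughly [153, 145]
avg = sum(spacings) / len(spacings)

predicted_omega = masses[-1] + avg
print(f"average spacing   ~ {avg:.0f} MeV/c^2")
print(f"predicted Omega-  ~ {predicted_omega:.0f} MeV/c^2")   # roughly 1680
# The Omega-minus was later observed at about 1672 MeV/c^2,
# with strangeness -3 and electric charge -1.

The charge and strangeness follow from the particle's slot at the bottom of the decuplet; the mass is the part that looks like sheer arithmetic, which is what makes the prediction so striking if group theory is only a useful fiction.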

That's enough for now...


Post #33

Post by Bugmaster »

harvey1 wrote:1). Humans experience the natural world, and as a result of this experience, they invent logic and mathematics based on rules of logical implication, which are also invented
I'd agree with this, sort of, but my mathematician friends probably won't... they'd say that the rules of mathematics are independent of our experience; thus, even a man who has spent his entire life in a sensory deprivation chamber would be able to invent them (assuming that such a man could think at all, which is a different story).
Does this accurately sum up your views on scientific realism along with your nominalism?
Pretty much, except for the detail in (1). I wouldn't call logical implications "models", but I guess that's just nitpicking on my part.
However, I think this claim is irrelevant if the original theory is "false," since the issue is that an original "false" theory can be made "true" by correcting an underlying assumption.
Ok, I guess I misunderstood what QD really means. Can you give an example of how I could change an underlying assumption in order to make my floating-rock theory true ? I don't see how I could do that, without violating the correspondence between the theory and observable reality, which would nullify the explanatory power of the theory.
If truth and falsity are established based purely on matching a theory's observables with reality, then how do you justify your scientific realism if it hinges only on matching observables with reality?
Er... I thought this is what scientific realism meant ? We construct a theory, test it against reality, and the more tests it passes, the more sure we become of its truth. As soon as it fails a test -- i.e., as soon as one of its predictions diverges from what we actually observe -- we're forced to label the theory as false, and construct a new one (which would often be a modification of the invalidated theory). Or do you mean something different by "observables" ?
If our conceptual tools (logic and mathematics) are completely false, and also given the QD thesis, then on what basis do you say that our theories can be "accurate depictions" of reality?
As you said above,
2). The rules of logical implication are true by definition (e.g., "x equals x"); they are not scientific models, they are mainly conceptual tools
Thus, our conceptual tools cannot be false. They're true by definition. That's not the same kind of truth as scientific "truth", of course. Scientific theories can never be said to be fully "true"; the best we can say is, "this theory corresponds to the physical phenomena it describes with a very high degree of certainty". In logic, on the other hand, you simply assign truth values to some initial statements, and build up from there.
Similarly, if scientific models are all based on false assumptions (i.e., the conceptual tools of logic and math), then how could our scientific models be true or approximately true? Even the Ptolemaic model was considered for the most part approximately true during the later years before its final rejection...
Again, there's a key difference between logical postulates, and the assumptions of the Ptolemaic model. Logical postulates are assumed to be true by definition, and they refer to the building blocks of logic, which we construct in our heads. But the Ptolemaic model refers to the motion of the planets in the real world, which can be observed directly. It falls into the realm of scientific "truth", whereas logical implications do not.
We now know, of course, that in principle such a theory could never be true since its assumptions were false.
On a side note, I think that if you expand the Ptolemaic model to an infinite number of epicycles, you'll get a series expansion of our current orbital mechanics... I think I remember reading this somewhere, but I haven't done the math myself, so I could be wrong. Anyway, that was totally off-topic, sorry.
How did this simple model also predict the particle's strangeness of −3, electric charge of −1, and a mass near 1680 MeV/c²? Wouldn't it have to be pure luck if group theory is a useful fiction?
As I said earlier, my quantum-fu is weak. However, AFAIK, the eightfold way is not mere group theory; it's an application of group theory to empirical data. Thus, it is a scientific theory that makes predictions that have been verified to be accurate... nothing wrong with that.


Post #34

Post by harvey1 »

Bugmaster wrote:Ok, I guess I misunderstood what QD really means. Can you give an example of how I could change an underlying assumption in order to make my floating-rock theory true ? I don't see how I could do that, without violating the correspondence between the theory and observable reality, which would nullify the explanatory power of the theory.
Obviously, the more directly we interact with familiar objects, the deeper our prejudice when it comes to the acceptability of the underlying assumptions. However, one can explain why a theory predicted incorrectly that a rock will float upward when in fact the rock falls downward. For example, one can say that your false theory was based on there being no "impurities" in the liquid that the rock is dropped in. These impurities in the liquid create a hyper-Higgs field, and the hyper-Higgs field offsets the natural direction of the immersed rock. What is really interesting about the hyper-Higgs field is that it explains why the universe is expanding versus contracting (for similar reasons that the rock sinks rather than floats). There's even evidence from deep space that the hyper-Higgs field can "account for unexplained high energy phenomena in the cosmos involving gamma rays or other astro-particles."

Of course, this explanation is easily rejected even though it technically accounts for an easily explained phenomenon, but the point is that as we move away from easily explained phenomena to phenomena that are not so easily explained, this process of keeping false theories alive becomes serious business that people invest their careers in defending. Accounting for the observables is not the sole basis on which a theory is accepted or rejected, since it is common knowledge that scientists can keep their pet theory alive as long as anyone will listen.
Bugmaster wrote:
harvey1 wrote:If truth and falsity are established based purely on matching a theory's observables with reality, then how do you justify your scientific realism if it hinges only on matching observables with reality?
Er... I thought this is what scientific realism meant ? We construct a theory, test it against reality, and the more tests it passes, the more sure we become of its truth.
Well, antirealists of course believe that the observables of a theory are matched against our observations; however, they don't argue that this means the theory is true. My question is how do you justify your belief that the theory is true or approximately true given the situation where someone can adjust their pet theory to account for the correct observations that were made? In my realist view, this issue is settled in a holistic fashion with respect to a theory's overall merit. That means that I think there must be something true about logic and math (and a principle of parsimony and a principle of aesthetics...) in order for me to make that assertion. If I say that logic and math are "true" by definition, that doesn't bode well for my realist argument since I am therefore saying that my theory is true by definition. It's not a pleasant thing for truth if everyone walks around saying their theory is true by definition.
Bugmaster wrote:As soon as it fails a test -- i.e., as soon as one of its predictions diverges from what we actually observe -- we're forced to label the theory as false, and construct a new one (which would often be a modification of the invalidated theory). Or do you mean something different by "observables" ?
But, who says that you have to modify a theory to make it more true (or more approximately true)? You can perfectly account for an observation by modifying the invalidated theory in more cockeyed ways. At no point are you ever forced to come to grips with reality if you are not restricted by a principle of parsimony. However, if you don't consider a principle of parsimony as the way nature actually works, then that begs the question as to how you can be a realist since the principle of parsimony is an invention, not an actual state of affairs that we claim is what truth actually is.
Bugmaster wrote:"this theory corresponds to the physical phenomena it describes with a very high degree of certainty". In logic, on the other hand, you simply assign truth values to some initial statements, and build up from there.
When you say a theory corresponds "with a very high degree of certainty," I take this to mean that you consider a good theory to be approximately true (i.e., the high degree of certainty qualifies our reason to believe the theory is approximately true), hence why you are a realist, right? However, as we discussed, any theory can be modified to correspond "to the physical phenomena it describes with a very high degree of certainty" in terms of accounting for the observations themselves. So, why do you take the added jump from having certainty with regard to accounting for the observables to being highly certain the theory is approximately true?
Bugmaster wrote:As I said earlier, my quantum-fu is weak. However, AFAIK, the eightfold way is not mere group theory; it's an application of group theory to empirical data. Thus, it is a scientific theory that makes predictions that have been verified to be accurate... nothing wrong with that.
Sure, however Gell-Mann (and, independently, Ne'eman) just applied SU(3) symmetry by grouping light hadrons into SU(3) multiplets. Now, why should a scientific model that does little more than apply a logical implication (SU(3) symmetry) tell you about the properties of a particle that had never been seen before? How can you say with a straight face that it has nothing to do with SU(3) being a true depiction of how nature works on some fundamental level?


Post #35

Post by Bugmaster »

Gah ! I was sure I'd replied to this already, but I must have forgotten to hit "Submit". Here we go, take two...
harvey1 wrote:However, one can explain why a theory predicted incorrectly that a rock will float upward when in fact the rock falls downward. For example, one can say that your false theory was based on there being no "impurities" in the liquid that the rock is dropped in.
Ok, so now I have two theories: one about rocks, and one about the hyper-Higgs field (sorry, I don't actually know what that is, but I don't think it matters in this context). Thus, I now have two theories to test instead of one. More work for me.
Of course, this explanation is easily rejected even though it technically accounts for an easily explained phenomenon
I think you're confusing a theory and a hypothesis. I can propose any number of hypotheses to explain any phenomenon, but they will only become theories -- i.e., they will only become worthy of notice by other people -- when I test them against the real world somehow. Thus, proposing increasingly outrageous hypotheses is not in my interest, because I'll never be able to support them all, human lifespan being limited as it is.
Accounting for the observables is not the sole basis on which a theory is accepted or rejected, since it is common knowledge that scientists can keep their pet theory alive as long as anyone will listen.
That's true, but, as I said earlier, mathematical elegance and simplicity are merely heuristics, not hard criteria. Those physicist-bots you mentioned earlier would not need these heuristics, since, by virtue of their thought-experimental powers, they'd be able to thoroughly test any hypothesis that comes their way.
My question is how do you justify your belief that the theory is true or approximately true given the situation where someone can adjust their pet theory to account for the correct observations that were made?
If they adjust their hypothesis to include more hypotheses, they will need to verify those as well, as I'd mentioned in the hyper-Higgs field above. The underlying principle is the same: the predictions made by your theories have to match what we observe when we test the theories, no matter how many theories you propose.
If I say that logic and math are "true" by definition, that doesn't bode well for my realist argument since I am therefore saying that my theory is true by definition. It's not a pleasant thing for truth if everyone walks around saying their theory is true by definition.
No, that's not correct -- or, at least, this is not what I mean when I talk about theories. Let me put it this way:

As I see it, a theory can be thought of as a black box with some sort of a mechanism inside. The box makes predictions about things we should observe. The theory is, as you put it, "approximately true", if all of the predictions it makes are supported by the evidence that we actually observe.

From the strictly pragmatic perspective, it does not matter how the mechanism inside the box works. As long as it makes accurate predictions, the theory is approximately true. However, this black box view of a theory is obviously not very useful -- how would we discuss it, or share it with other people, or figure out what it's predicting in the first place ?

In order to do all that, we need to open the box, and look at the guts of the mechanism -- logical implications, math, etc. We can now put the theory to the test, or put it to use; in fact, this is the only way we can wrap our heads around it at all. However, note that the guts of the mechanism are irrelevant as far as the scientific truth of the theory is concerned; what matters is the predictions it makes.
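
If it helps, here's a toy rendering of that black-box picture (Python, with made-up numbers): the test harness only ever looks at the predictions coming out of the box, never at the guts inside it.

Code:

def passes_all_tests(predict, observations, tolerance=1e-6):
    """A theory's scientific standing depends only on its predictions."""
    return all(abs(predict(x) - y) <= tolerance for x, y in observations)

# Two boxes with very different guts:
def mechanistic_box(t):   # a simple falling-body formula (g ~ 9.8 m/s^2)
    return 0.5 * 9.8 * t**2

def lookup_box(t):        # a bare lookup table, with no mechanism at all
    return {1.0: 4.9, 2.0: 19.6, 3.0: 44.1}[t]

observations = [(1.0, 4.9), (2.0, 19.6), (3.0, 44.1)]
print(passes_all_tests(mechanistic_box, observations))   # True
print(passes_all_tests(lookup_box, observations))        # also True -- the
# harness can't tell them apart until a new observation comes along that
# only one of them can handle.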

Logical implications and such are the tools we use to construct theories. We could invent whatever kinds of tools we want, as long as they do the job; when some better tools come along, we can discard our old ones. This has actually happened more than once: for example, the transition from perfectly circular orbits with epicycles to ellipses, or the transition from trigonometry to complex numbers in electromagnetism, or the transition from Euclidean to Riemannian geometry in cosmology. I'm sure this will happen many times in the future.
At no point are you ever forced to come to grips with reality if you are not restricted by a principle of parsimony.
As I'd mentioned before, this isn't true. The more claims you make, the more evidence you need to justify them, otherwise they'll remain mere hypotheses.
However, if you don't consider a principle of parsimony as the way nature actually works, then that begs the question as to how you can be a realist since the principle of parsimony is an invention, not an actual state of affairs that we claim is what truth actually is.
I believe that the principle of parsimony, as applied to science, is a heuristic that reduces our experimental testing workload.

Now that I'm done defending my views, let me attack yours for a bit :-)

You make a claim that logical implications, the principle of parsimony, etc., do exist in reality. If this is indeed a scientific hypothesis, then it must make predictions, and these predictions need to be falsifiable. So... what experiment would you devise in order to disprove your hypothesis (by falsifying its predictions) ? All other scientific theories, such as Newton's Laws, relativity, quantum mechanics, etc., make statements about physical objects (big ones like cannonballs, or small ones like particles and waves). Thus, we can go out into the real world, find us some physical objects (in some cases, we'd have to build a supercollider first, but the principle is the same), smash them together and see what happens.

Can we smash together logical implications ? I fail to see how this could be done, at all. I can see only two ways out of this predicament:

1). Logical implications do not exist in reality.
2). Logical implications do exist, but not in the same way that Newton's Laws exist; in other words, dualism is true.

This is a dilemma; I'm obviously picking (1), and it sounds to me like you're picking (2), unless you can explain to me how logical implications are falsifiable, or provide an alternative third choice.


Post #36

Post by harvey1 »

Bugmaster wrote:Ok, so now I have two theories: one about rocks, and one about the hyper-Higgs field... Thus, I now have two theories to test instead of one. More work for me... I can propose any number of hypotheses to explain any phenomena, but they will only become theories -- i.e., they will only become worthy of notice by other people -- when I test them against the real world somehow... If they adjust their hypothesis to include more hypotheses, they will need to verify those as well, as I'd mentioned in the hyper-Higgs field above. The underlying principle is the same: the predictions made by your theories have to match what we observe when we test the theories, no matter how many theories you propose... The more claims you make, the more evidence you need to justify them, otherwise they'll remain mere hypotheses...
Let's try and nail this down a bit:
  1. Original theory S1 became a theory because its predictions P1, P2, P3,..., PN were all correctly observed
  2. S1 does not make any new Q predictions that are observable (either too costly, or beyond the range of current observation technology, etc.)
  3. Newly proposed theory T1 has been constructed and provides the same predictions P1, P2, P3,..., PN as S1.
  4. T1 does not provide any new Q predictions that are observable (either too costly, or beyond the range of current observation technology, etc.)
  5. Quine-Duhem thesis tells us that for every SN theory, a TN theory is always possible to construct which shares the same predictions P1, P2, P3,..., PN. Any other Q prediction that could in principle differentiate S and T is inaccessible to us ((2) & (4))
  6. Approximate truth of a theory is established solely based on P1, P2, P3,..., PN of a theory
  7. Ergo, the approximate truth of a theory is indeterminate, since (6) forbids any means other than P1, P2, P3,..., PN to determine approximate truth, and (5) allows other theories besides S to predict P1, P2, P3,..., PN
Now, based on this argument, how do you establish S's novel predictions as justification for S's approximate truth when another T theory can produce the same predictions? It seems you want to say S predicted P1, P2, P3,..., PN before T was created, but surely you don't think that "I had P first" is a reason to establish the truth of S over T, do you?
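
If it helps to see the situation schematically, here is a toy version (Python, purely illustrative) of S and T: two theories that agree on every prediction we can actually check, and differ only in a regime that stands in for "too costly or beyond current observation technology."

Code:

ACCESSIBLE_LIMIT = 10.0    # stands in for the reach of current experiments

def theory_S(x):
    return x**2

def theory_T(x):
    # identical below the accessible limit, wildly different above it
    return x**2 if x <= ACCESSIBLE_LIMIT else x**2 + 1.0e6

accessible = [1.0, 2.0, 5.0, 9.9]     # the P1, P2, P3,..., PN we can observe
out_of_reach = [50.0, 100.0]          # the Q predictions we cannot

print(all(theory_S(x) == theory_T(x) for x in accessible))     # True
print(all(theory_S(x) == theory_T(x) for x in out_of_reach))   # False, but no
# experiment we can run will ever tell us which branch is the right one.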
Bugmaster wrote:You make a claim that logical implications, the principle of parsimony, etc., do exist in reality. If this is indeed a scientific hypothesis, then it must make predictions, and these predictions need to be falsifiable.
It's not a scientific hypothesis in the same way that scientific methods are not scientific hypotheses. Do you think that we must verify every standard by the standard we are proposing? What we are talking about here is the standard to consider something true. Scientific hypotheses assume that we have already determined that standard prior to using the methods of science. It's a philosophy of science issue.


Post #37

Post by Bugmaster »

Firstly, I disagree -- in part -- with your statements 2 and 4:
2). S1 does not make any new Q predictions that are observable (either too costly, or beyond the range of current observation technology, etc.)
4). T1 does not provide any new Q predictions that are observable (either too costly, or beyond the range of current observation technology, etc.)
The predictions that your theory makes have to be observable, otherwise the theory has no value. For example, I could hypothesize that there are purple smurfs living on a distant planet in the Andromeda galaxy, but, since we have no way of falsifying this claim, my hypothesis is void. It's an interesting thing to think about, but it carries no scientific value.

In fact, I believe the Nobel Prize committee used exactly this argument when they gave Einstein the prize for the photoelectric effect, and not for his theory of relativity... and I think they were right to do so, given the information they had at the time.

Furthermore, I'm still a bit fuzzy on statement 5:
5). Quine-Duhem thesis tells us that for every SN theory, a TN theory is always possible to construct which shares the same predictions P1, P2, P3,..., PN.
So, essentially, the only difference between SN and TN is the internal mechanism. I see no problem with having multiple theories that explain the same phenomena. However, if we assume that there exists an external world that operates by some sort of fixed rules, then our goal is to expose as many of these rules as accurately as possible. Thus, it would be more efficient for us to seek theories that unify other theories -- because these theories would have more explanatory power. Thus, if SN and TN make equivalent predictions, but SN can be unified with some other theory QN, then SN is preferable. In essence, our choices are not between SN and TN, but between SQ = (SN U QN) and TN; if SQ explains more, then it is preferable, according to your statement 6:
6). Approximate truth of a theory is established solely based on P1, P2, P3,..., PN of a theory
It's not a scientific hypothesis in the same way that scientific methods are not scientific hypotheses. Do you think that we must verify every standard by the standard we are proposing? What we are talking about here is the standard to consider something true. Scientific hypotheses assume that we have already determined that standard prior to using the methods of science. It's a philosophy of science issue.
Woah... it sounds like you agree with me, then ? It sounds like you're saying that we have no scientific reason (and, indeed, we cannot have any scientific reason) to believe that implications exist in the same way that gravity exists.


Post #38

Post by harvey1 »

Bugmaster wrote:Firstly, I disagree -- in part -- with your statements 2 and 4... The predictions that your theory makes have to be observable, otherwise the theory has no value.
They are observable; that's what predictions P1, P2, P3,..., PN were all about. They were observables that were confirmed for both S and T.
Bugmaster wrote:So, essentially, the only difference between SN and TN is the internal mechanism. I see no problem with having multiple theories that explain the same phenomena.
Not just mechanism. One might be evolution theory and the other could be IDism. It is always possible to put forth a totally different theory T that gives you predictions P1, P2, P3,..., PN that S gave you.
Bugmaster wrote:Thus, it would be more efficient for us to seek theories that unify other theories -- because these theories would have more explanatory power.
S and T both compete for this quality also. If we interpret the former theories according to T, we are given reasons why T is consistent with those other theories. Now, maybe it requires a change in philosophy if we accept T over S, but S required a change in philosophy too.
Bugmaster wrote:Woah... it sounds like you agree with me, then ? It sounds like you're saying that we have no scientific reason (and, indeed, we can not have any scienrific reason) to believe that implications exist in the same way that gravity exists.
No, I treat science as methodological naturalism, so I don't see science as metaphysical or about truth.


Post #39

Post by Bugmaster »

harvey1 wrote:They are observable; that's what predictions P1, P2, P3,..., PN were all about. They were observables that were confirmed for both S and T.
I was referring to this (bolded for clarity):
S1 does not make any new Q predictions that are observable (either too costly, or beyond the range of current observation technology, etc.)

That's an unacceptable excuse. If your theory makes predictions, they should be falsifiable.
Not just mechanism. One might be evolution theory and the other could be IDism. It is always possible to put forth a totally different theory T that gives you predictions P1, P2, P3,..., PN that S gave you.
I don't think I understand how this is possible. As I mentioned above, I see a theory as a box, consisting of a mechanism on the inside, and predictions on the outside. What else is there ? If there is nothing else, then, in order for two theories to not be identical, they'd have to have different mechanisms, different predictions, or both.
Bugmaster wrote:Thus, it would be more efficient for us to seek theories that unify other theories -- because these theories would have more explanatory power.
S and T both compete for this quality also. If we interpret the former theories according to T, we are given reasons why T is consistent with those other theories.
Er, I'm not sure what this meant. Can you give an example ?
No, I treat science as methodological naturalism, so I don't see science as metaphysical or about truth.
But, originally, you seemed to claim that logical implications do exist in nature. That would put them squarely into the domain of methodological naturalism, and thus science.


Post #40

Post by harvey1 »

Bugmaster wrote:That's an unacceptable excuse. If your theory makes predictions, they should be falsifiable.
Sure, they can be falsifiable in principle, but physical limitations prevent S and T from being falsified in practice. Perhaps it's because that Texas particle accelerator was never built, or maybe it's because Planck-scale energies are not reproducible on Earth, etc. The important point, though, is that both S and T produce P1, P2, P3,..., PN, which are observable. Unfortunately, since T is also able to produce P1, P2, P3,..., PN, in practice S and T cannot be decided between on observables alone.
Bugmaster wrote:I don't think I understand how this is possible. As I mentioned above, I see a theory as a box, consisting of a mechanism on the inside, and predictions on the outside. What else is there ? If there is nothing else, then, in order for two theories to not be identical, they'd have to have different mechanisms, different predictions, or both.
Once you know P1, P2, P3,..., PN, the Quine-Duhem thesis says that you can construct a theory T that can also predict P1, P2, P3,..., PN. The mechanism of T could be ridiculous but consistent with all of known science. I once "met" a physicist on the net who insisted that his "model of reality" could reproduce every prediction of science. Of course, his model was a severe distortion of how physics is currently understood, but no one could show that his mathematics was incorrect. If I recall, all of his predictions were in common with currently accepted theories, but of course, according to him, Einstein was way off, etc. This is, I think, a consequence of the Q-D thesis. Feynman even said in some recently published letters that this is the unfortunate fate of science if experiments become too expensive to pursue further. We can all be glad that we won't live long enough to hear such nonsense.
Bugmaster wrote:
harvey1 wrote:S and T both compete for this quality also. If we interpret the former theories according to T, we are given reasons why T is consistent with those other theories.
Er, I'm not sure what this meant. Can you give an example ?
Here's my ole' debating buddy's website. Quine-Duhem thesis lives on...
Bugmaster wrote:But, originally, you seemed to claim that logical implications do exist in nature. That would put them squarely into the domain of methodological naturalism, and thus science.
No. Logical implications are necessary to tell us metaphysically what is true about the world. From a pure methodological perspective, science is only concerned about which models work to best explain the world and provide predictions that are verifiable. In terms of a metaphysical depiction, I think we need to choose between realism and anti-realism (for example), but no such requirement exists if we stick to a methodological approach to science. We are talking about realism along with what is actually the case, so that discussion is a metaphysical discussion--viz. a philosophy of science discussion. Science as a pure methodological enterprise should remain agnostic about metaphysical issues, although science can produce theories that are more conducive to (or even require) a particular metaphysical outlook.
