 Apollodorus
Joined: 11/24/2009
Msg: 147
IQ is a garbage tool for determining intelligence
Just because you scored high on an IQ test does not mean you are smart. The measure of intelligence is based on how you use knowledge in real-life situations, not the amount of information you know.
 scorpiomover
Joined: 4/19/2007
Msg: 148
IQ is a garbage tool for determining intelligence
Posted: 7/4/2010 5:35:27 AM
RE Msg: 196 by Kardinal Offishall:

Kepler, Newton, Einstein, Maxwell, Dirac, Heisenberg, Noether, and the other great men and women of physics, who opened our eyes to the reality of the universe, all based their work primarily on mathematical grounds. Empirical evidence only confirmed what they first proved mathematically.
The point is that a priori thinking about the natural world is never enough. Ultimately there needs to be some degree of empirical corroboration, even if that corroboration only confirms a segment of a given theoretical edifice.
I rejoiced when I read this. We can come up with any theory we want. But we only know which ones relate to reality by testing them against the evidence shown by reality.

Empiricism has come a long way since the British empiricists. If we followed the type of empiricism espoused by Hume, particle physics, astrophysics, molecular biology, inter alia, would not be possible.
Actually, the reverse was the case.

Electricity and magnetism were subjects that made little sense to us, until Maxwell ignored the evidence, worked mathematically to explain it, and produced Maxwell's field equations which even predicted totally unexpected phenomena like radio waves.

Particle physics was similarly a mystery until the quantum physicists took up working from the first principles of mathematics, and discovered that the universe reflected the mathematics.

Astrophysics was largely confusing to us, until Kepler, Newton and Einstein, all started working with the mathematics from the ground up, and found that the stars reflected the mathematics.

If logic were truly "our greatest teacher," we'd have little use for scientific investigation; philosophy would've long since delivered the goods on the most fundamental questions and much else. It ostensibly hasn't.
You are right. Philosophy uses the logic of discourse, Plato's logic. Mathematics uses the logic of rigorous examination of one's principles, to form a single integrated whole: Socrates' logic.

We cannot ever guarantee our theories are in tune with nature. So we are likely to develop theories that are simply not reflective of reality, and, because of the points of the European Empiricists, which we cannot deny, we are likely to find evidence that supports all of our theories, the wrong as well as the right.

But we can guarantee that nature is mathematically logical, and so those theories which are mathematically illogical are not in tune with nature. Thus, by using mathematical Socratic logic to eliminate our mathematically illogical theories, we guarantee that the theories that are left are far more likely to be in tune with nature. Of those which are left, we have thought them out so well that we can define their predictions under extremely exacting conditions. Those conditions are so specific that we can easily test for them, and thus say with confidence when a theory is not in tune with nature. Thus, it acts as an incredibly fast form of extinction of theories that lack good adaptations. All we then need to add is mutations of theories from our imagination, and we have an incredibly fast form of evolution of our theories.

If pure unadulterated logic were really the royal road to knowledge that you take it to be, then the anatomically modern humans of over 100 000 years ago (or, alternatively, humans during the Upper Paleolithic Revolution) should've been able to construct rocket ships and land them on the moon with nothing but their bare brains.
I cannot see any reason why they could not have done so using mathematically Socratic logic. Maybe, if 2409 years ago, the world had sided with Socrates over Plato, we'd have been there a lot sooner.

I'm not sure which parenting theories you're referring to here. I, however, am referring to the behavioral genetics of intelligence and personality, which is sound science, based on a consensus of researchers in the field, the only opinions that ultimately count.
You have made it plain that you will not even consider any view that is not already stated by behavioural geneticists. So what you have already read, you have already accepted, and what you have not read, you will not even consider. So there is little point in discussing the issue any more.

It's nice to come across another person who knows what sexual selection is. I think that it's been an overlooked mechanism in evolution, something that Darwin himself was quite big on right from the start (viz. in “The Descent of Man, and Selection in Relation to Sex”). It’s essentially why, for instance, males and females are different in both body and mind. (I suspect you’ve read Matt Ridley’s book “The Red Queen” or Geoffrey Miller’s book “The Mating Mind”?)
Didn't read either. I simply used mathematically Socratic logic to figure it out for myself. That alone, should tell you something incredibly significant.
 scorpiomover
Joined: 4/19/2007
Msg: 149
IQ is a garbage tool for determining intelligence
Posted: 7/4/2010 5:39:42 AM
RE Msg: 196 by Kardinal Offishall:
What many people also aren't aware of is that "middle-child syndrome" is actually a predicted outcome of Robert Trivers' elegant and landmark parent-offspring conflict theory, which derives its theoretical and empirical power from the fact that it's intimately grounded in the kin selection revolution in evolutionary biology -- that is, gene-centered neo-Darwinism.

As Trivers showed in the mid-70s, middle-born children are in the most precarious position so far as the resources invested in them by their parents are concerned. And since they only share approximately 50% of their genes with their siblings, it is in their genetic interest to siphon more investment than their parents ideally want to dole out.

(Incidentally, this is also the central reason why tensions between siblings exist.) Hence why they react adaptively to their place in the birth order. This is a cognitive-behavioral adaptation tuned to the contingent fact of where in the sequence of familial births a child finds themselves situated, and something all children have. It's a universal feature of our species-typical cognitive architecture, cued only in the right environmental circumstances.

Behavioral traits such as this one can just as well remain dormant yet still be universal in our species. And it's also worth bearing in mind that birth-order effects are completely consistent with the high heritabilities of personality. In other words, birth-order doesn't erase all genetic effects.

I've only given a sketch of parent-offspring conflict and kin selection, but I can expatiate on it if need be.
I am much interested in learning all views on behaviour, to increase my understanding. But I haven't fully comprehended this point. So I would greatly appreciate it if you would explain this theory in greater detail.

I would also appreciate it greatly, if you would explain all the jargon, like "parent-offspring conflict theory", "kin selection", "cognitive-behavioral adaptation", and "species-typical cognitive architecture", so that in future discussions, I don't come off like an ignorant moron.
 abelian
Joined: 1/12/2008
Msg: 150
IQ is a garbage tool for determining intelligence
Posted: 7/15/2010 8:39:40 AM

Particle physics was similarly a mystery until the quantum physicists took up working from the first principles of mathematics, and discovered that the universe reflected the mathematics.

Huh? Your personal bias against physics and physicists causes you to make statements which have no connection to historical fact or even the mathematics you claim applies to physical theories.

(1) Quantum mechanics does not solve any mysteries of particle physics, since quantum mechanics only quantizes the classical variables, i.e., it prescribes the canonical replacements of classical variables by hermitian operators, for example:

p -> -i hbar d/dx and E -> i hbar d/dt

This is called first quantization, and the quantization follows immediately from the Poisson brackets (well known to physicists of the 19th century) by replacing:

{p_i, x_j} = -delta_ij

with

[p_i, x_j] = -ihbar delta_ij

If you make those replacements in the classical Hamiltonian, E = (p^2/2m) + V, you get the Schroedinger equation. To obtain particles, you have to quantize fields, which requires quantum field theory; and to obtain anti-particles and spin, you need relativistic quantum field theory, since both of those things explicitly depend on a Lorentz invariant spacetime.[1] The particles themselves require quantization of a field (second quantization).
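(As a concrete check of those canonical replacements, here is a minimal symbolic sketch; it is an illustration of this step rather than anything from the original post, and it assumes a free particle, V = 0. It verifies that a plane wave with the classical dispersion relation solves the resulting Schroedinger equation.)

```python
# Substitute E -> i*hbar*d/dt and p -> -i*hbar*d/dx into E = p^2/2m and
# check that a free-particle plane wave with the classical dispersion
# relation solves the resulting Schroedinger equation.
import sympy as sp

x, t, k, m, hbar = sp.symbols('x t k m hbar', positive=True)
omega = hbar * k**2 / (2 * m)               # E = p^2/2m with p = hbar*k
psi = sp.exp(sp.I * (k * x - omega * t))    # free-particle plane wave

lhs = sp.I * hbar * sp.diff(psi, t)              # E -> i hbar d/dt
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)    # p^2/(2m) with p -> -i hbar d/dx

print(sp.simplify(lhs - rhs))  # prints 0: psi solves the free equation
```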

(2) What's even stranger about your comment is that (a) proving the quantum mechanical formalism correct did not result in solving any mysteries apart from justifying the use of things like the Dirac delta function, which was used extensively by Dirac when the theory was developed, but only mathematically justified 30 years later in the context of distributions. (b) Quantum field theory, which does describe particles, has never been made mathematically rigorous. Attempts to do that have come in the form of axiomatic quantum field theory (in particular the Wightman axioms) and in attempts to mathematically justify Feynman path integrals. Neither has been accomplished, yet both predict the physical results. The closest one can get to mathematical justification comes from analytic continuation, which turns a Lorentz spacetime into a Euclidean spacetime via the replacement t -> it and then rotating the result back (i.e., Wick rotation).


---------
[1] In particular, the metric tensor is just the anti-commutator of the Dirac matrices and the spin tensor is the commutator of those same matrices. This was also known in 1928, or possibly earlier, by Dirac, who obtained a solution to the energy-momentum-mass relation:

E^2 = p^2 + m^2

by rewriting it in the form:

E = a.p + bm

and solving for the coefficients a and b. (a is actually three components, a_i, since p is a three-vector and a.p is the scalar product of a with p.) This is straightforward to solve, although it's not very obvious that the a and b must be matrices of at least dimension 4. (Hint: square both sides to obtain conditions on the a_i and b.) The Dirac matrices are obtained by multiplying the a_i by b, so that b = gamma_0 and gamma_i = b a_i. The matrices, along with gamma_5 = i times the product of all four matrices, form an algebra (the Dirac algebra).
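(A quick numeric illustration of the "square both sides" hint — a sketch in the standard Dirac-Pauli representation, not from the original post: E = a.p + bm squares to E^2 = p^2 + m^2 exactly when b^2 = 1, {a_i, b} = 0 and {a_i, a_j} = 2 delta_ij, and a 4x4 representation satisfying those conditions exists.)

```python
import numpy as np

# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

b = np.block([[I2, Z2], [Z2, -I2]])                 # b = gamma_0
a = [np.block([[Z2, s], [s, Z2]]) for s in sig]     # the three a_i

anti = lambda X, Y: X @ Y + Y @ X                   # anticommutator

assert np.allclose(b @ b, np.eye(4))                # b^2 = 1
for i in range(3):
    assert np.allclose(anti(a[i], b), 0)            # {a_i, b} = 0
    for j in range(3):
        assert np.allclose(anti(a[i], a[j]), 2 * (i == j) * np.eye(4))

gamma = [b] + [b @ a[i] for i in range(3)]          # gamma_0, gamma_i = b a_i
print("conditions hold; Dirac matrices built")
```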
 scorpiomover
Joined: 4/19/2007
Msg: 151
IQ is a garbage tool for determining intelligence
Posted: 7/16/2010 12:02:32 PM
RE Msg: 206 by abelian:

I haven't yet completed my reply to Kardinal Offishall's points, as I wanted to do a good job on it. But I just felt that I had to reply to this, as I let slide your previous post on the difference between mathematics and physics. So I want to make one or two points clear:

Huh? Your personal bias against physics and physicists causes you to make statements which have no connection to historical fact or even the mathematics you claim applies to physical theories.
Mathematicians and physicists are trained in entirely different ways.

Physicists are trained to examine physical phenomena, deduce relationships between them, and then develop theories to explain them, and hopefully to make predictions. Obviously, we can never 100% guarantee those observed relationships, proposed theories and predictions are definitely true, because we have only a partial view of the data that exists in the universe, and therefore our data set might imply conclusions that are wrong. So there has to be a certain amount of latitude in them. So if a physicist says that a certain theory is true, provided that the evidence is overwhelmingly consistent with his theory, and far more so than any other alternative theory, then we are apt to apply that latitude and say that he is right. That can lead to an over-confidence in one's theories. But that is the price to pay for such a latitude, when dealing with the real world.

Mathematicians are trained to examine logical relationships, deduce relationships between them, and then develop theorems to explain them, and to make predictions about logical statements based on those theorems, which is called pure mathematics. Because those theorems are based on logic, and in logic we choose the axioms under which that logic applies, we don't need to consider whether we have the full picture. So in mathematics, there is no latitude at all. A mathematical theorem is either 100% right, or it's not right at all. So mathematicians are cautious by nature, and they only state a mathematical theorem is true if they are 100% sure. Even then, other mathematicians will not say that theorem is true until they have checked every symbol and statement of the proof and confirmed that every single one is 100% correct; if even 1 out of 1 trillion symbols is not correct, then mathematicians regard the whole theorem as nothing more than a hypothesis.

A classic example of this is Fermat's last theorem. Most people, including physicists, called it "Fermat's last theorem", because the evidence was overwhelmingly consistent with the theorem. However, mathematicians called it "Fermat's last conjecture", a hypothesis, simply because it was not proved 100%. Even when Andrew Wiles claimed to have a proof of it, mathematicians were still sceptical that he'd proved it, even though there seemed to be no practical doubt that Fermat's last theorem was true.

Another example of this is how mathematicians and physicists approach previously established laws. Physicists are expected to conduct at least some of the experiments that prove those laws, but not all; they only have to learn the laws, and prove some of them. Mathematicians are expected to prove every theorem they use, 100%, for themselves, or not to use it at all.

When the two collide, the same things happen. Mathematicians who are applying mathematical principles to laws of physics are doubly cautious, because they cannot guarantee that those laws of physics are 100% correct, as they can about the mathematical theorems they have proved for themselves. Physicists are far more confident than they would be normally, because they can guarantee those laws of mathematics are 100% correct, or mathematicians wouldn't state they are true, and so they have far more reliable results than if they were reliant solely on laws of physics.

I regard physics as extremely useful, and use it all the time. I have a huge regard and admiration for most physicists, as they have told me huge amounts about the real world.

However, I still realise that everything in mathematics is not allowed to be anything other than 100% correct, while theories of physics are allowed a small margin of error, which means that physical theories are generally in the right direction, but cannot ever be said to be 100% correct. That latitude means that mathematicians tend to be very cautious about anything they say, while physicists are often over-confident when things appear to be a certain way. That leads to a cognitive bias in some physicists (not in many, but in quite a few), who aren't taking the much more rigorous approach that mathematicians would, particularly in the area they are most confident in: developing theories about the real world.


Particle physics was similarly a mystery until the quantum physicists took up working from the first principles of mathematics, and discovered that the universe reflected the mathematics.
(1) Quantum mechanics does not solve any mysteries of particle physics, since quantum mechanics only quantizes the classical variables, i.e., it prescribes the canonical replacements of classical variables by hermitian operators, for example:

p -> -i hbar d/dx and E -> i hbar d/dt

This is called first quantization, and the quantization follows immediately from the Poisson brackets (well known to physicists of the 19th century) by replacing:

{p_i, x_j} = -delta_ij

with

[p_i, x_j] = -ihbar delta_ij

If you make those replacements in the classical Hamiltonian, E = (p^2/2m) + V, you get the Schroedinger equation. To obtain particles, you have to quantize fields, which requires quantum field theory; and to obtain anti-particles and spin, you need relativistic quantum field theory, since both of those things explicitly depend on a Lorentz invariant spacetime.[1] The particles themselves require quantization of a field (second quantization).
All of those required the reification of observed physical phenomena into mathematical symbols, and then the use of mathematical theorems to deduce conclusions about them, like the Schrödinger equation.

(2) What's even stranger about your comment is that (a) proving the quantum mechanical formalism correct did not result in solving any mysteries apart from justifying the use of things like the Dirac delta function, which was used extensively by Dirac when the theory was developed, but only mathematically justified 30 years later in the context of distributions.
So what you are saying is that quantum mechanics never gave us ANY conclusions about the real world? None? That's quite surprising to hear. Perhaps you can quote Richard Feynman, Niels Bohr, Murray Gell-Mann, Ernest Rutherford, or some other famous particle or quantum physicist explaining exactly that.

(b) Quantum field theory, which does describe particles, has never been made mathematically rigorous. Attempts to do that have come in the form of axiomatic quantum field theory (in particular the Wightman axioms) and in attempts to mathematically justify Feynman path integrals.
Mathematics describes the logical world. Mathematical modelling describes how one might apply those theorems to the real world, as statistics applies probability theory. However, there is always a problem applying mathematics to physics, because while the mathematical theorems can be said to be 100% correct, the observed relationships and theories of physics cannot. So we can never truly expect to make physics totally mathematical.

We can only apply what we know for sure from mathematics to the physical world, stating that if certain physical observations that match the axioms of a mathematical theorem are 100% true, then the conclusions of that theorem must be 100% true as well. If we find those conclusions are not 100% true in all situations, then one or more of those observations that matched those axioms were wrong, and our observations were incomplete and premature.

An example of this is Noether's theorem, which you pointed out to me: certain symmetries, coupled with the other axioms of the theorem, lead to the conclusion that certain conservation laws must be true. The reliability of those conservation laws depends on how far the symmetries that we observed are true, coupled with how far the other axioms of Noether's theorem are true. If one or other of those conservation laws is found not to operate all the time, or to work only to 1 trillion trillion decimal places, then those symmetries are not 100% true, or one of our observations that match the axioms of Noether's theorem is not 100% true.
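To make the symmetry-to-conservation link concrete, here is a toy numeric sketch (my own illustration, with assumed masses and initial conditions, not anything from the thread): a two-body force that depends only on the separation vector is translation-invariant, and the total momentum indeed comes out conserved.

```python
import numpy as np

m1, m2, dt = 1.0, 2.0, 1e-3
x1, x2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
v1, v2 = np.array([0.0, 0.3]), np.array([0.0, -0.1])

def force_on_1(x1, x2):
    r = x2 - x1
    return r / np.linalg.norm(r)**3   # attractive inverse-square; depends only on separation

p_initial = m1 * v1 + m2 * v2
for _ in range(10_000):
    f = force_on_1(x1, x2)
    v1 += (f / m1) * dt
    v2 += (-f / m2) * dt              # equal and opposite reaction
    x1 = x1 + v1 * dt
    x2 = x2 + v2 * dt

print(np.allclose(m1 * v1 + m2 * v2, p_initial))  # True: momentum conserved
```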

But as I said, physicists are given a lot of latitude, while mathematicians are given none; so mathematicians do everything they can to never make unproven assumptions, while physicists are allowed to make unproven assumptions all the time. So physicists simply don't have the requirement to think in nearly as much detail, or at the level of accuracy, that mathematicians do; and perceived necessity is the mother of invention.

Also, I can see that you like to use a lot of equations. I, too, can use equations; as a mathematician, it's what I use all the time. However, when speaking to others, I prefer to convert my equations into a more readable form for those who don't know the mathematics as well as I do. That allows others to understand what I am saying for themselves, free of jargon, and that allows them to point out places where I might be wrong.

I simply cannot take any position on your writing built on such equations, as I cannot say whether it is right or wrong, and my mathematical training has taught me never to blindly accept the word of anyone about anything I cannot prove for myself, just because they sound clever through using jargon.

I do at some point want to study relativity and quantum physics in detail, as I find it extremely interesting. But when I do that, it will be for the sake of relativity and quantum physics, not because I want to prove some point on an internet forum.
 scorpiomover
Joined: 4/19/2007
Msg: 152
IQ is a garbage tool for determining intelligence
Posted: 7/18/2010 6:37:35 PM
RE Msg: 202 by Kardinal Offishall:
Perhaps I should elaborate on what I was implying by saying that empiricism has come a long way since the British Empiricists, and why fields like particle physics, astrophysics, and molecular biology would be intractable under the auspices of empiricism as originally formulated by Hume.

By this I mean that our concept of observation -- and what counts as a valid observation -- has changed radically since the days of Locke and Hume. For instance, we know about phenomena like stellar fusion, neural activity, and subatomic particles in an indirect manner.

That is, phenomena and entities like these are not known on the basis of “direct” observation, where by direct observation is meant looking and seeing with our eyes or through a telescope, microscope or corrective lens, the latter of which still implicate vision in a rather intimate sense.
Actually, that doesn't fit, because Hume was a historian, and wrote on science for the well-educated rich British men of his time. Living in the 18th century, many well-educated and rich British men had a classical education, which included the history of science. So if he had not studied the history of science, he would have written things that did not fit with it at all, and been lambasted as an idiot. He did get lambasted for being an atheist, but was generally considered very smart. So we know that he had to have had an extensive knowledge of the history of science, particularly about something like Heliocentrism, as that was a hot debate for over 1000 years, until Newton's time, only a century before his, and whose results were still being confirmed in Hume's time.

So he would have known that the notion of Heliocentrism was only accepted based on the work of telescopic observation, and that the Heliocentric behaviour of the planets goes against the normal "direct" observations, and can only be gleaned by a markedly advanced level of observational skill, that combines the most exacting observations with the ability to consider multiple perspectives, which far surpasses even our abilities to observe stellar fusion, and neural activity.

Of course, if science had stuck to the strictures put forth by Hume, such phenomena would forever be beyond our epistemic reach.
That would only be true if we presumed that Hume was not a historian and had thus not known anything about Newton's work, other than what most people think they know about the history of Heliocentrism.

I do not deny that mathematics is unimportant in science -- far from it. My central point was that a priori reasoning and mathematics, taken by themselves, and without any empirical constraints, do not justify belief in any real-world phenomena. There must be some form of empirical corroboration to vindicate a theory. But we both agree on this point.
Of course they don't. Mathematical theorems all follow one basic principle: A => B, where A is the set of axioms and B the conclusion. There are no mathematical theorems that don't follow this method. Whether or not a particular mathematical theorem addresses a situation is simply a question of whether or not empirical observations can confirm that the axioms of that theorem are true for that situation, in which case the conclusion of the theorem must be true.
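That schema can even be written down formally. A minimal rendering in Lean (an illustrative fragment of my own; the names are invented): a theorem is a verified implication, and applying it to a proof that the axioms hold yields the conclusion.

```lean
-- The basic shape of every theorem: axioms A, conclusion B, and the
-- theorem itself as the verified implication A → B. Applying it to a
-- proof of A (the "empirically confirmed axioms") yields B.
theorem apply_theorem (A B : Prop) (thm : A → B) (axioms_hold : A) : B :=
  thm axioms_hold
```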

I'm going to explain a bit more, here. I tried to reply to your other points about empiricism, and got bogged down. I'd rather just make my points clear.

Many people get confused about rationalism, empiricism, and mathematics. They ALL rely on logic, and they ALL rely on evidence. You cannot work any theories, predictions or relationships out without some form of logic. However, you have no proof that any of them apply to your situation without evidence. So what is the difference between them?

Well, it's a question of priority.
1) Rationalists say that you start with an argument, and then you see if the empirical evidence fits your argument. The Rationalist credo is "everything must be rational".
2) Empiricists say that you start with the evidence, and then you come up with a theory to fit your evidence. The Empiricist credo is "everything must match reality".
3) Mathematicians say you do both, but much more accurately than anyone else would even dare to try. The mathematician's credo is "everything must match everything, exactly, or it isn't right".

1) Mathematicians work on the rationalist side, by developing models with certain axioms, just to explore. They then deduce conclusions based on those models, which are called theorems, propositions, lemmas, and so on. They then explore those models further and further, playing with the axioms and the theorems, until they have an extremely inclusive model that takes into account anything that makes intuitive sense to be part of the model. They refine the model, again and again, until it becomes an inclusive but exact whole, whose axioms lead us to lots of important conclusions. That system is called a mathematical theory. An example of such a theory is calculus.

2) Mathematicians work on the empirical side, by examining all the evidence minutely. Mathematicians look to see if the evidence almost exactly matches the axioms of a known, developed model, because then we can apply all the theorems of the model easily. If no such obvious pattern is found, then they look for patterns, with the determination to find patterns that are mathematically accurate, namely, accurate to the same degree of accuracy as the measurements themselves. They then either check again for a matching model, or decide to spend the time to develop a model that does fit the patterns found.

They go back and forth, developing model after model, and matching experiments to their axioms, until they have a model that fits all the data that we have, to the accuracy that we have, and that gives us extremely useful answers about the data.

The process is incredibly painstaking. Johannes Kepler, who was a mathematician, wrote that he reviewed Tycho Brahe's data 70 times, and Tycho Brahe's data was far more accurate, far more detailed, and far more extensive than anything else available. It was Kepler's Laws of Planetary Motion that were the basis of Newton's work. Kepler did the empirical mathematics, looking for patterns in exacting detail and seeing if they matched any known models, until he hit upon the idea of an ellipse with the Sun at one of its focal points. Newton then did the rationalist mathematics, by developing the model of calculus, developing the model of planetary mechanics with the axioms of his three laws of motion and the law of gravitation, and then applying the model to the orbits of the planets.
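For illustration, here is a small numerical sketch of that Kepler/Newton fit (my own, using units where GM = 1 and assumed initial conditions): integrating Newton's inverse-square law produces a bound orbit whose perihelion and aphelion repeat cycle after cycle, i.e. the closed ellipse Kepler extracted from Brahe's data.

```python
import numpy as np

GM, dt = 1.0, 1e-4
r = np.array([1.0, 0.0])   # start at perihelion
v = np.array([0.0, 1.2])   # below escape speed, so the orbit is bound

radii = []
for _ in range(200_000):   # roughly 1.3 orbital periods
    acc = -GM * r / np.linalg.norm(r)**3
    v += acc * dt          # semi-implicit Euler (stable for orbits)
    r += v * dt
    radii.append(np.linalg.norm(r))

print(f"min r = {min(radii):.3f}, max r = {max(radii):.3f}")
# The min/max stay pinned at perihelion/aphelion, orbit after orbit:
# a closed ellipse with the Sun at one focus.
```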

It might seem to rationalists that mathematicians are describing rationalist behaviour, or to empiricists that they are describing empiricist behaviour. But they aren't really doing either. They literally don't care about either. They just see both as sources of information to refine until they have the possible answers that almost exactly match the data and that are consistently logical.

Also, when I said "you go back to scratch", I didn't mean we wipe everything from our memories. But rather, it's like solving a puzzle. Sometimes, you fit piece after piece, and then find that a piece won't fit, no matter what you do. But you know they all do fit. So one piece has been fitted in the wrong place, but you don't know which one. Often, a very quick solution is to take the whole puzzle apart and re-add each piece, all over again. You are still much faster than the first time, because you know why you put each piece where you did, and you can recall that knowledge quickly. But that process is useful, because it requires you to re-examine each step, and you often find that there is suddenly another way that one piece will go. So you try both approaches, and it quickly becomes clear which approach is the right one to solve the puzzle, even before you get to the piece that doesn't fit. Then the piece fits, perfectly, without you having to deal with the problem all over again.

It's an approach that is quite easy and useful for mathematics, because it allows you to check your work again and again, rather like repeating an experiment in empirical work.

But quantum physicists were steeped in experiment. There was a direct reciprocal relationship between theory and experiment. This is a far cry from sitting in the arm chair and deriving quantum mechanics from first principles.
However, those results were the important ones. A scientist who claims to have found many things, but whose experiments cannot be repeated, is not giving us anything reliable to work with. But even a single experiment that shows the same results, time after time, for everyone, teaches us an important lesson about reality, and it is THOSE lessons that have to be integrated into our understanding of the physical nature of reality.

Even string theory cannot be said to be pure and unadulterated theory stemming from first principles. There are constraints with which string theorists work with, constraints provided by other theoretical considerations, which are themselves grounded empirically.
Of course. But again, that's not because the mathematics is wrong. It's because you have not selected a model whose axioms are confirmed by your observations.

Of course, in times of “crisis,” where a pre-existing and heretofore dominant paradigm is consistently unable to account for new experimental results, potential theoretical successors are proposed.

But these potential successors do not simply scrap all extant theories and start anew. The key issue revolves around the “ceteris paribus” clause (the “all else being equal” stipulation).

In times of theoretical crisis, successor theories grapple with how much else to “hold equal” -- that is, the extent to which we ought to hold constant the rest of our theoretical knowledge, broadly construed, and how much of it we ought to call into question and attempt to replace.
The problem with much of physics, and much else of our understanding of reality, even art, is that most experiments either confirmed what we already thought, or they displayed unexpected results, time after time, results that could not be married with the existing theory, and that caused a massive paradigm shift in thinking. However, when you encounter those types of results, you simply cannot pull a theory out of thin air. So you try to work with what you know, "ceteris paribus". But invariably, ceteris paribus doesn't work at all in those cases. Why not?

Well, because when we come up with any theory, it is the product of our minds, which, as Locke pointed out, make certain assumptions about reality. Those underlying assumptions are developed when we are young, and we are able to develop their conclusions over time, by subconscious thinking. So we already have an incredibly detailed model of reality, whose axioms are our underlying assumptions. Any theory that we believe in, based on empirical observations, is actually just an application of that model to our empirical observations, and we can come up with such theories quite easily, and adjust them easily, because we are familiar with the model. As a result, if any new observations are found that are consistent with our underlying assumptions, they are consistent with our model, and we can easily understand them, even if they disagree with our current theories. So it is quite easy to adjust any theory we have to fit those new observations.

Normally, there are things that we can expect will be a likely factor in certain phenomena, and things that are probably not a factor. How do we know this? It's not indicated by our theory. We know this because it's indicated by our model. Ceteris paribus, the null hypothesis, just means that in our model, those factors should not affect the phenomena we are currently investigating.

However, if we discover any observation that contradicts our underlying assumptions, then that contradicts our model. We now have to develop a new model in our own heads, and it will take some time before we are familiar enough with that new model, to be able to use it. That is why we experience a sense of loss and a sense of confusion with a paradigm shift. Such a shift is not a problem for our theories. It is a problem for US, because it requires changing our underlying assumptions of the world. So we are forced to start from scratch, to re-develop our model.

However, when we do this, we can no longer rely on our old model. We could only rely on ceteris paribus because our brains told us that those things were not a factor in our model, and we are simply not familiar enough with our new model to state with any certainty whether those things are a factor or not.

We cannot even use any of our old theories, because we are unable to state how they would look in our new model. But we know that we still did those experiments, and got those results.

So, we go back to scratch. Our subconscious slowly pulls piece after piece out of our memory, and then slots it into the new model. Sometimes they stay the same; sometimes they look wholly different in our new model, in which case we'll have a new theory for that phenomenon. But over time, we slot more and more pieces in, and then we have the new model. Then those theories which have changed become the new discoveries. We can then say with some certainty what might and might not be a factor in a particular phenomenon, and we have thus regained the power of ceteris paribus.

(All these points tie into fundamental issues in the philosophy of science, such as the Quine-Duhem thesis, etc., which I won’t get into right now.)
The Quine-Duhem thesis is actually central to this, because it shows that we shouldn't be able to do ceteris paribus. But we do. The reason is that we don't base our ideas on our theories, but on our model, and our model is updated periodically via paradigm shifts. In the meantime, we assume our model is right, because if we don't, we really don't have anything. With our model, we subconsciously evaluate what we believe to be a possible factor. Those things we test for. The rest we don't, and those with no chance of being a factor in our model, we regard as ridiculous.

The history of science demonstrates that, in periods of revolutionary science -- times in which paradigm changes occur -- predecessor theories are not always done away with at a wholesale level.

For example, much of the underlying mathematical core of predecessor theories is retained in their replacements. In the case of electromagnetism, Maxwell’s theory of light in fact retained some of the mathematics of Fresnel’s equations -- specifically the mathematics describing the propagation of light at right angles to the direction of incidence.

So rather than Maxwell working in a “vacuum,” as it were -- from mathematical first principles alone -- he was both directly working with and building on what he aimed to replace. This is a historiographical point that is often missed in these sorts of discussions -- though it is known in the history of physics and philosophy of science literatures.
Of course Maxwell wasn't working in an intellectual vacuum. A logician would say as much, because without ANY evidence he'd have had to rediscover every previous significant experiment by sheer chance, and the probability that he just discovered them all by chance, when they were actually discovered through lots of work over centuries, is unbelievably low. It would also have been rather pointless, because those experiments and their results were known and repeatable, and that didn't change no matter what theory you used.

However, in a paradigm shift, the inner model has to change, and some theories will change their form to be understood in terms of the new model. But, in order to do that, you cannot just copy them into the new model. They have to be pulled in and re-evaluated in terms of the new perspective.

It's rather like entering a building alone when a power cut happens, and you cannot find any candles or torches. You looked at the room visually, but now you have to examine and navigate the same room with your eyes closed. You will feel the same objects you saw, but you will interpret them entirely differently. It's incredibly painful at first, as you constantly misinterpret your data, and you bump into lots of things and hurt yourself. But eventually, you realise that you have to treat things differently. You cannot "expect" your touch data to match your visual data. You have to slowly feel each object, and then run through all the objects you know you've seen, thinking which one "fits". You have to start with the touch data first, as your primary source of experience, and rely on your visual data only as a secondary indicator of what things might be like. It's quite important, because when the lights come back on, you often discover that a lot of things you thought were in one place were in another place entirely, and there were things that you touched quite often but never noticed visually at all.

The model of touch experience is different from the model of visual experience. So the same objects completely transform, looking entirely different. So you have to start from scratch and examine each object from scratch, but with the advantage that you already know what most of the objects look like.

What is very interesting about the last 500 years is that this process was usually slow and laborious, and many things were misunderstood, again and again. However, in those few cases when a scientist or other student of the world decided to take all our theories and observations on a subject and turn them into a mathematical set of principles, the results seem to have been over 1000 times more accurate than normal, and to have made wild predictions that have nearly always turned out to be spot on. It's been very, very rarely done. But every time it has been done, it's completely revolutionised the field.

My earlier explanation about the brain using models to develop theories explains why. Models are what we use in mathematics. But unlike the models our subconscious develops, mathematicians develop many models, and write down all the theorems for each model. Switching a model in your subconscious requires years of development. But switching a model in mathematics becomes as easy as changing hats. It's even easier for mathematicians, because they change models all day long. So suddenly, there are no problems with paradigm shifts at all. If you are Einstein, or Maxwell, or even Kepler or Newton, all you need to do is find the right model, or look for the patterns and then develop models with those patterns. It's still not easy, because as a mathematician, you're looking for painstaking accuracy. But you aren't guessing all that much anymore, either. You're going as far as any scientist can go: directly to the model. Of course, you could come up with a model of models. But that's yet to be done.

However, what is often missed, and what most people fail to understand, is that mathematicians think far, far deeper into any problem than most people need, or than is worthwhile, 99% of the time, in most practical questions. Mathematicians do not deal with the mundane. Even a theory of physics is just a single phenomenon, a single variable for consideration. Mathematicians consider wide generalities, making the whole of space-time and reality their playground, with all the empirical evidence of human recorded knowledge as just constants to be manipulated and used to solve complex equations.

Mathematicians think into problems far more deeply than even physicists do, not into the nature of just the physical universe, but in what assumptions are required for that physical universe to behave as it does.


I cannot see any reason why they could not have done so using mathematically Socratic logic. Maybe, if 2409 years ago, the world had sided with Socrates over Plato, we'd have been there a lot sooner.
I posed whether you thought that the earliest anatomically modern humans or humans during the Upper Paleolithic Revolution (approximately 40 000 years ago) would have been able to land on the moon using strictly a priori reasoning.
But I was never talking about reasoning without checking which of one's models apply, using empirical evidence to do so. Mathematicians prove, for some A and B, that A => B. Physicists say A is true. So the result is that if the physicists are right, then B must be true. The only issue remaining is to confirm whether A is true, and whether B is true. With mathematics, one can develop so many theorems from a single model that it becomes quite easy to confirm any model, using a multitude of choices of experiments.

Again, you are assuming that I hold an entirely different position from the mathematical approach.

Technological applications of science depend on so much more than just our theoretical apprehensions. But this is actually a very deep question. Many historians of science have argued that everything that precipitated out of the so-called scientific revolution, technological applications included, was radically contingent on cultural and historical factors.

(In fact, many historians of science refer to the scientific revolution in quotations because they doubt whether it can actually be properly called a revolution, rather than just part of a continual intellectual development that extends beyond just 16th and 17th century Europe, with roots in ancient Greek philosophy.)

Subtract these jointly necessary conditions (the historical and cultural factors) and the so-called scientific revolution does not materialize, and so too the theoretical basis for technologies that would eventually place humankind on the moon. A fortiori, the earliest humans could not have done so (could not have landed on the moon), owing to the lack of these necessary conditions.
Again, the cultural and historical factors merely gave scientists the experiences that led them to think along the lines of certain models. Mathematics does not need those cultural and historical factors, because it is about the study of those models themselves.


You have made it plain that you will not even consider any view that is not already stated by behavioural geneticists. So what you have already read, you have already accepted, and what you have not read, you will not even consider. So there is little point in discussing the issue any more.
I legitimately do not know which parenting theories you were specifically referring to, as you didn’t mention them. I could very well agree with you if you do list them however, as many if not most of the parenting theories of yesteryear are claptrap.
I was not aware that parenting theories from the 60s onwards were now considered claptrap, because they have formed the basis of parenting for most parents in the West since the 60s, and they have been enshrined in law, at least in the UK, and I believe in other Western countries as well. So I find it a bit odd for you to say that, when the scientific community has not come out and given parents any indication that they were wrong.

And it is not that I will not consider any views apart from those endorsed by the behavior genetics community. It's that I see no reason to believe in postulates that have already been adequately tested and refuted. And neither should you.
I have to, because scientists have put forth ideas, then rejected them, and then changed their minds yet again. I can think of the recent statement about eggs, for one. First, scientists said they were good for you, because they were a source of protein. Then having more than 2-3 a week was not good, because of cholesterol. Now, scientists say that you can have as many eggs as you want, as long as they aren't fried.

When scientists change their minds like that, it means that I can never take what they say as gospel, and I have to consider for myself whether they have considered all the factors. I do that by analysing the way those statements are made, to see if they have considered all the factors that other areas of science have shown could be relevant. In the case of eggs, I already knew that eggs were not good for you if they were fried, because it was well-known that fried food was incredibly bad for you if taken regularly. So it was already clear to me that eggs were probably fine for you, if they weren't fried. By the by, I have reasons for believing that this new statement will be revised again in the future.

This isn't the only case. But it is a nice, clear, recent one. So I do have some empirical ground to suggest that scientists only speak about what they've tested for, which is based on their hypotheses, which are based on the cultural and historical factors they were raised with; but that isn't the whole picture, not by a long chalk.


Didn't read either. I simply used mathematically Socratic logic to figure it out for myself. That alone, should tell you something incredibly significant.
How did you come up with the idea of sexual selection then? Surely not from first principles whilst sitting in the arm chair. (And by first principles I literally mean first principles, viz. starting completely from scratch, forgetting everything you know about evolution and everything else, then working your way back up using nothing but a priori reasoning, or "mathematically Socratic logic," as you so eloquently put it.)
Simple. Remember, I'm working mathematically, not anti-empirically. My father has blue eyes and tanned skin. My mother has green eyes and light skin. My older brother and I have blue eyes and light skin. My sister and younger brother have green eyes and tanned skin. None of us could have inherited both qualities from a single parent. So each child inherited some qualities from one parent, and some from the other. Plenty of other families show similar patterns, in height, build, hair colour, and plenty of other characteristics. One simply needs to observe that in families there is some difference in the children: the boys are not clones of their father, nor the girls clones of their mother, and even in the same family there is almost always a difference between each child, except in monozygotic twins. That shows that each child inherits a unique set of attributes, some that match some of the father's attributes, and some that match some of the mother's. I knew of genetics before. But as my mathematical training required me to prove every theorem that I learned, no matter how well-known it was, I naturally check things for myself, and this shows sexual selection. However, it was my mathematical training that taught me to think into the problem of how such a dynamic process would work over time, and that led me to my conclusions.
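That family observation can be put in a toy simulation (my own sketch; it assumes one locus per trait and ignores dominance and polygeny, which real eye colour and skin tone involve): each child draws one allele per locus from each parent at random, so siblings almost never end up with the same mix.

```python
import random

random.seed(0)
N_LOCI = 8
father = [("F1", "F2")] * N_LOCI   # father's allele pair at each locus
mother = [("M1", "M2")] * N_LOCI

def make_child():
    # one allele per locus from each parent, chosen at random
    return tuple(random.choice(f) + "/" + random.choice(m)
                 for f, m in zip(father, mother))

children = [make_child() for _ in range(4)]
print("distinct genotypes among 4 siblings:",
      len(set(children)))   # almost surely 4: no two siblings match
```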

I don’t mean to be facetious here, but, the fact that you needed -- indeed absolutely required -- a posteriori content to have the very idea of sexual selection should tell you something incredibly significant.
Again, you are speaking as if you are talking to a non-mathematical non-empiricist. Your very concept of a distinction between a posteriori content and a priori reasoning is an entire fallacy of thought. Think about it:

What rationalist is born blind, deaf, dumb, and without the ability to feel, sense heat, feel pain, or smell, and is still able to communicate their ideas to anyone? They could not, because they would not know what sounds to associate with the concepts they wish to express. But each second, one receives a huge amount of data from one's senses, and over twenty years, that's millions of pieces of empirical evidence. So ALL rationalists must, by being human and expressing their ideas, have huge amounts of empirical evidence.

What empiricist is born without the ability to reason out thoughts, without a brain? 2+1=3 is a result of reasoning from 1+1=2 and 1+1+1=3. Even the ability to walk requires exact proportions of tension from each muscle to co-ordinate the various bones of the body, to walk upright and not fall over. It is not purely biological, because it doesn't come automatically. It is learned over months, which is why we lie, then crawl, then fall over a lot, and finally learn to walk unaided. It's a mental process. However, it's far too complex to be expressed in an easy relationship that can be observed by simple empirical observation. Empiricism alone would also mean that when we learn to run, we would fall over just as much, as running follows a different set of rules from walking. But we adapt, because we generalise the rules into a complex theory of mechanics, by reasoning. We can observe so many cases in which our senses and abilities learn extremely complex tasks, and we do not learn in a straight-line fashion but in plateaus of understanding, that it is just not feasible to suggest that we operate by empirical evidence alone. We're born with an extremely high-level rationalist subconscious, and we use that ability every minute of every day, for most of our lives, though much of it we do subconsciously.

We humans are born with rationalist and empiricist parts, both. To deny one or the other seems ludicrous to me. Not that I'm saying you do. But I never really understood why anyone would take one position or the other. It's like deciding to vote firmly for a party, without knowing if their policies are things you agree with overall, just because they are supposed to be a party that votes in your interest.


I am much interested in learning all views on behaviour, to increase my understanding. But I haven't fully comprehended this point. So I would greatly appreciate it if you would explain this theory in greater detail.

I would also appreciate it greatly, if you would explain all the jargon, like "parent-offspring conflict theory", "kin selection", "cognitive-behavioral adaptation", and "species-typical cognitive architecture", so that in future discussions, I don't come off like an ignorant moron.
Not a problem. The fact that you already have a good grasp of evolution is to your credit.

Kin-selection is a central tenet in modern evolutionary biology. It essentially forms a cornerstone of sociobiology, the guiding paradigm of ethology, the science of animal behavior.

It was developed by the British evolutionary biologist W. D. Hamilton in the 60s and is a direct implication of neo-Darwinism (Neo-Darwinism being the synthesis of Darwinian evolution and Mendelian genetics developed in the 1930s by figures such as Sewall Wright, R. A. Fisher, and J. B. S. Haldane.)

Kin-selection’s mathematical basis can be described verbally as altruistic acts directed toward biological relatives. Accordingly, an altruistic act will benefit, and will be made by, the altruist so long as the cost to the altruist is less than the benefit to the recipient, weighted by the degree of relatedness.

The degree of relatedness between two kin is important, as the odds that a recipient of an altruistic act also possesses the genes giving rise to the altruistic act tails off in proportion to the degree of relatedness.
I can see something in that: someone I knew said that as his daughters got to be 15, he felt far less protective towards them, which suggests that there is a strong non-rational level of protection provided by parents to their offspring.

That is, the odds that your sister also has the genes subserving your altruistic acts is greater than the odds that your cousin does. Hence, you should be more willing to perform costlier altruistic acts to your sister than you would your cousin, all things being equal.
That doesn't work for me, because it has happened that kids were switched at birth, and so far I have never heard anyone suggest that the parents cottoned on. They would have, if they'd had a biological sniffer for genetics; without that, they'd treat all their children the same.

So I'd be inclined to say that we have a general psychological desire to ensure that we live on in our offspring, and that in most cases, that offspring is our genetic product, but in many cases, it's not.

I'd also be inclined to point out that in family businesses, some business owners have a layabout son, and an employee about the same age, who embodies the ethos and values of the business owner, and who tries to learn the business owner's ways, and wants to carry on the business just as the owner did. The employee is carrying the memes of the business owner, and the son is carrying the genes of the business owner. Often, the business goes to the son. But it does also quite often go to the employee. This suggests that it's more of a psychological effect, that is dependent on how much the owner values his memes over his genes.
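For reference, the rule quoted above is Hamilton's inequality, r*B > C: the cost to the altruist must be less than the benefit to the recipient weighted by relatedness. A minimal numeric sketch, assuming the standard relatedness coefficients of 0.5 for a full sibling and 0.125 for a first cousin, with illustrative costs and benefits:

```python
def altruism_favored(cost, benefit, relatedness):
    """Kin selection predicts the altruistic trait spreads when r*B > C."""
    return relatedness * benefit > cost

B, C = 10.0, 2.0                       # illustrative benefit and cost
print(altruism_favored(C, B, 0.5))     # True: the act toward a sister pays
print(altruism_favored(C, B, 0.125))   # False: the same act toward a cousin does not
```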

Ultimately, genes for altruism impel an organism -- such as a human -- to benefit those other vehicles (those other humans) that are likely to carry copies of the same genes for altruism, and hence increase their chances of being replicated in the next generation via increasing the survival and reproductive prospects of their carriers.
I'm really unsure what the new idea is here, that parents look after their offspring? You can observe that in almost every family, to some extent, even the abusive ones, and I'm fairly sure that you can observe that in practically every species. It's not new.

But to suggest that it has a genetic basis, well, to say that, I'd have to at least have some kind of proof, that cannot be explained by a simple case of psychological homology and parental basis.

The work of Hamilton ushered in what has become known as the “inclusive fitness revolution” in evolutionary biology, as it delineates what is biologically -- that is, evolutionarily -- possible.

Distilled to its fundamentals, it essentially states that all organisms act in ways that maximize the replication of their genes -- that is, to preferentially treat their own survival and reproductive prospects over all other organisms, but also other organisms carrying copies of genes for kin-directed altruism wherever the trait for kin-directed altruism has evolved.
I'm not so sure that they always do. I said so before: I know that some parents treat their offspring horribly. I've also met friends who looked enough alike to be siblings. But I've never seen their parents even think of adopting them, or treating them better or worse than any other friend of their child.

There is much more to be said of parent-offspring conflict. The evolutionary biologist and geneticist David Haig has shown that it occurs as early as in the womb. His work has shown that a co-evolutionary arms race has occurred between mother and fetus.

Because a fetus only possesses 50% of its mother’s genes, it would pay the fetus to drain its mother of more nutrients than the mother -- more specifically the mother’s body -- would otherwise want to provision.

At the same time, the fetus carries 50% of its father’s genes, and there are no guarantees that the mother’s future reproductive efforts will overlap with the father’s. Hence, much of the fetus’ efforts benefit the paternal genes it carries to the detriment of the mother. (Yes, this is all very, very twisted!)

Haig has shown that fetuses have evolved to churn out a hormone called human placental lactogen (hPL), which counteracts the insulin in the mother’s body. Why has the fetus evolved this?

Because the fetus gains from any increase in resources funneled to it from its mother, in this case from blood sugar. The mother, on the other hand, has evolved to respond to this attempt at commandeering her blood sugar supply by increasing her production of insulin, which absorbs the excess blood sugar that would otherwise funnel into the fetus, and which from her perspective amounts to a net decrease in her current and future reproductive fitness -- the amount of nutrients she can invest in other current children as well as future ones.

Haig has shown that the amount of hPL (the hormone secreted by the fetus) is on the order of 1000 to 2000 times the level of other hormones in the mother’s body -- a clear hallmark of an antagonistic co-evolutionary arms race.
There is a very good reason for any foetus developing such a hormone. The foetus is an attachment to the body, not an integrated part, like the liver. The blood supply goes through the womb. So its calls for glucose are biologically second in demand to the mother's system. However, the foetus itself is growing. Protein-building requires a tremendous amount of energy. Ask any body-builder: they eat 6000 calories just not to lose the muscle they have. Here, you are building an entire body. So it needs tremendous amounts of energy to build the proteins to make that body, which means it needs to make a large demand on the system, much higher than normal.

However, blood sugar acts as a hormone itself. If you raise the level of blood sugar, by eating a lot of pure sugar, then every cell's intake of sugar increases, and the whole metabolism rises for a short time, until the body burns up the blood sugar. That process is called a "sugar high". If the foetus were able to make a more normal demand for increased intake, the mother would get a sugar high. Not only would the foetus get only a small portion of that, but during the high, the metabolism of the mother would rise sharply, which would be none too good for the foetus. No, it needs a way to siphon off far more of the mother's resources than the other organs get. If it doesn't get that, it won't grow properly, and such foetuses would either miscarry, be stillborn, or be born deformed.

The other option is for it to ensure that its mother eats huge amounts of carbs. That would give it enough calories without the sugar high. But that would make the mother obese, which severely decreases the chances that the baby will come out healthy and alive. Also, any such mother would now have a habit of obesity, and obese mothers make obese babies, and obese babies suffer from diabetes, heart disease, and a lot more. So those genes would not survive either.

So it's pretty much forced to adjust the hormone levels in its favour.

That might seem like an evolutionary arms race. But considering that doctors have found that all a mother really needs to ensure her baby gets enough is an extra small sandwich a day during the last few months before birth, that's not even worth considering as a disadvantage. For most mothers, that's not even enough to help them lose a bit of the extra weight that most women carry relative to men, and wouldn't mind losing anyway.
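
As an aside, the mutual-compensation dynamic this exchange describes is easy to caricature in code. The Python sketch below is a toy model, not Haig's actual model: each round the fetus secretes enough hPL to secure a fixed net transfer over the mother's insulin, and the mother cancels a fixed fraction of the fetal signal. The closer the cancellation fraction gets to 1, the higher both standing hormone levels climb, even though the net transfer stays modest -- which is the pattern the "1000 to 2000 times" figure is being read as.

    # Toy escalation loop, illustrative only (not Haig's actual model): each
    # round the fetus secretes enough hPL to secure a fixed net transfer over
    # the mother's insulin, and the mother cancels a fraction of that signal.
    def escalate(offset, target=1.0, steps=500):
        hpl = insulin = 0.0
        for _ in range(steps):
            hpl = target + offset * insulin   # fetus restores its net transfer
            insulin = offset * hpl            # mother cancels most of the signal
        return hpl, insulin

    for offset in (0.5, 0.9, 0.99):
        hpl, insulin = escalate(offset)
        print(f"offset={offset}: hPL={hpl:.1f}, insulin={insulin:.1f}, "
              f"net={hpl - insulin:.2f}")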

Indeed, nature is insidious in some quite opaque ways. And I’ve only provided a condensed overview of these issues. It gets worse! These explanations, as well as others, are the fundamental basis of social psychology, yet they are seldom taught as such in social psychology courses. (Long story.)
I'd love to hear some more, because so far, the one you've presented I would only consider insidious if it were NOT there.

As for cognitive adaptations, an easy way to understand what they are is to in a sense think of them in terms of software programs. This notion is central to evolutionary psychology, which posits the existence of a multitude of such innate (in-born) programs, which in turn possess a genetic basis.
Unfortunately, this is impossible. Even if you used every DNA base-pair for one program, you'd get about 40Mb, which would never be enough to even process language, let alone audio or visual data.
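
As a back-of-envelope check on that figure, here is a quick Python calculation. The inputs are standard textbook numbers, not anything from this thread: roughly 3.2 billion base pairs, 2 bits per base pair (4 possible bases), and about 1.5% of the genome protein-coding.

    # Rough information capacity of the human genome, for scale only.
    BASE_PAIRS = 3.2e9        # approximate size of the human genome
    BITS_PER_BP = 2           # 4 possible bases = 2 bits per base pair
    CODING_FRACTION = 0.015   # rough protein-coding share of the genome

    total_mb = BASE_PAIRS * BITS_PER_BP / 8 / 1e6
    print(f"whole genome: ~{total_mb:.0f} MB")                     # ~800 MB
    print(f"coding share: ~{total_mb * CODING_FRACTION:.0f} MB")   # ~12 MB

On those assumptions the raw capacity is closer to 800 MB; a figure like 40Mb is in the right neighbourhood only if it refers to the protein-coding portion counted in megabases.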

The idea is that such mental programs evolved because they increased the survival and reproduction of our human ancestors on the African savanna during the Pleistocene epoch.
Again, not really possible. The number of possible variations in programs that are not detected except by physical testing, is billions per program. Each program would have to randomly occur in a billion separate humans before one would get it right enough for it to be a useful adaptation. The rest would find it such a cognitive disadvantage, that many could easily end up with the human equivalent of the BSOD (Blue Screen of Death - one day, you try to speak, and you have a stroke, and die). Still many more would find that it would slow them up enough to get eaten by lions. Programming is such a difficult job, that the very idea that you could evolve even a very simple program by genetics, is quite unfeasible. Sexual selection would be even worse. Splicing 2 programs together is worse than a cut-and-shut. It's programming suicide, unless it's done by a programmer.

The rest of this is quite impossible. It's as if an evolutionary biologist came up with this who had never had to work as a programmer in the working world. In the lab, sure, it's easy to think this is true. But the lab is nothing like the African savanna. The working world is, because if a company finds a fault in a program, the programmer normally gets the blame, even when it isn't his fault at all. So as a programmer in the working world, you experience the possibility of job extinction all the time. You kill off the code that is likely to get you fired, which is so much of it, that no species would ever survive.

There might be other models that might fit the way that humans do things, and that could fit in with evolution and with the limitations from our genes. But this one definitely isn't one of them.

Can I ask, which of the evolutionary psychologists who came up with this theory had extensive experience of multiple computer systems and computer languages, had used a good few of them commercially in the working world, and understood from practical experience what things were possible, and what things were just not even worth considering? Or should I just assume that none of them did?

Two examples of evolved software programs in humans: our “instinctual” sense of physics -- particularly as demonstrated in very young infants
I would say that it's not instinctual, because I've seen a baby surprised that things drop to the ground go through a learning process. Also, I once heard of an experiment in which students were given glasses that turned their vision upside-down. Their brains took a whole 3 days to correct themselves. If physics were "instinctual", such corrections would be almost instantaneous, not a slow learning process. I wouldn't even say that basic maths, the underpinning of physics, is instinctual, based on certain books I've read that taught maths to uneducated Eastern Europeans, written 200 years ago by someone who clearly didn't learn it at school, but by figuring it out himself. A real mind-bender for someone taught maths in school.

But I would say that the human mind has an incredible ability to learn things to be so habitual, that we regard it as instinctual, when it's really learned behaviour.

-- and our preference for symmetrical faces in opposite-sex romantic partners (though that is not the only thing a preference for symmetry is implicated in).
If that was true, then symmetrically deformed faces would be preferable to non-symmetric but undeformed faces.

As for “species-typical cognitive architecture”…this is just the way evolutionary psychologists refer to human nature. In other words, the innate psychological aspects which all humans share -- even if they aren’t phenotypically expressed -- such as the capacity for language, a moral sense, anti-cuckold adaptations in men, and so on and so forth.
That explains it. Architecture means an entirely different thing in computing. It means the basic design under which all programs that work on those computers must be executed.

Note that the ostensible variance between individuals in innate intelligence (and perhaps groups as well) falls outside the bounds of what can properly be called “species-typical” -- that is, found in all humans.
Using the concept of cognitive architecture found in IT, it makes a lot of sense.

The reason why this is so is still largely a puzzle, but there have been some intriguing explanations proffered recently. Perhaps I can adumbrate said explanations if anyone is curious.
You can, and I'd like to hear more.

Also, do bear in mind that I have given a very streamlined account of these concepts, so I’ve skipped over many finer points. But this is inevitable when giving a condensed explanation of such things.
I realise that. However, the account that you have given is so indicative of a lack of understanding of the real problems within computer programs, as opposed to what inexperienced laymen think the problems are, that it indicates a lack of understanding of how any computing system could ever hope to work.

These topics require a horrendous amount of background knowledge to fully appreciate. So that said, I’m willing to illuminate any ambiguities or comment some more.

If you’re at all interested in the evolutionary basis of mind, this short introductory paper by Leda Cosmides and John Tooby, the two founders of evolutionary psychology, is a nice start: http://www.psych.ucsb.edu/research/cep/papers/A0529.pdf
I read it. But it makes a number of assumptions that I found hole after hole after hole in. It's reverse engineering, as the article stated, something that I'm extremely familiar with, because in IT you have to do a lot of reverse engineering. You have to use a lot of existing programs in IT, but you don't have the source code for them, and the manual doesn't explain much of what they do; often they contradict what the manuals and help files imply. So you have to figure out how they really work, as opposed to how they are supposed to work, to get your code to work reliably and efficiently. It takes a lot of practice, and what you start out thinking is often the opposite of what you really need to do in reverse engineering. The article really does read like a layman who started out with an assumption of reverse engineering, but never had to test those preconceptions in reality, like you have to in IT.

scorpiomover, I wanted to give you some kudos: you come across as a rather civilized and open-minded individual. It's good to see that civilized discourse is indeed possible on the net.
Thanks. I know that I've come across as quite scathing. It's because I feel that the subject just is not approached with enough raw experience of how such cognitive programs could work in reality, or of how to back-track from what we see of cognitive behaviour to what might be possible, that I feel we are walking a long way down the wrong path in the forest. I feel that giving evolutionary psychologists the job of being a real programmer for a few years, with some really difficult programming jobs, would give them the empirical experience to put their theories on a much more solid track, one that would explain their findings very, very clearly to them.
 abelian
Joined: 1/12/2008
Msg: 153
IQ is a garbage tool for determining intelligence
Posted: 7/19/2010 9:30:34 AM

Mathematicians and physicists are trained in entirely different ways.

However, the difference is not the difference you think it is.

All of those required reification of physical observed phenomena into mathematical symbols, and then to use mathematical theorems to deduce conclusions about them, like the Schrödinger equation.

What's your point? Those things were done by physicists.

So what you are saying is that quantum mechanics never gave us ANY conclusions about the real world? None? That's quite surprising to hear. Perhaps you can quote Richard Feynman, Niels Bohr, Murray Gell-Mann, Ernest Rutherford, or some other famous particle or quantum physicist explaining exactly that.

Improve your reading skills and try to make use of simple logic. What I said was:
(2) What's even stranger about your comment is that (a) proving the quantum mechanical formalism correct did not result in solving any mysteries apart from justifying the use of things like the Dirac delta function, which was used extensively by Dirac when the theory was developed, but only mathematically justified 30 years later in the context of distributions.

However, there is always a problem applying mathematics to physics, because while the mathematical theorems can be said to be 100% correct, the observed relationships and theories of physics cannot. So we can never truly expect to make physics totally mathematical.

I've already explained the relationship between mathematics and physics to you at least once, yet you seem to be incapable of grasping it.

So physicists simply don't have the requirement to think in nearly as much detail and level of accuracy as mathematicians have, and perceived necessity is the mother of invention.

Your notion of a hierarchy in which mathematicians are somehow superior thinkers is why you have so much difficulty understanding the relationship between mathematics and physics and continually make assertions which are contradicted by historical fact.

Also, I can see that you like to use a lot of equations.

You really consider that to be ``a lot of equations,'' especially given that you are making claims about the very mathematics those simple equations entail?

I too, can use equations. After all, as a mathematician, it's what I use all the time. However, when speaking to others, I prefer to convert my equations into more readable matter for those who don't quite know the mathematics as well as I do.

If you don't understand something and can't figure it out by looking it up, just ask and I'll explain it in as much detail as you require, provided simple English and concepts that adults can understand with a little (but not necessarily zero) thought are sufficient.

I simply cannot take any view of your writing on such equations, as I cannot say if it is right or wrong, and my mathematical training has taught me to never blindly accept the word of anyone about anything I cannot prove to myself, just because they sound clever due to using jargon.

And you claim to have a mathematical background? Wow. Everything I posted is mathematics that is fairly simple for a mathematician who has had an introductory course in abstract algebra, but really doesn't require nearly that much sophistication. I mean, how hard can it be to take the definitions I provided (and you can easily look up) for the classical variables E and p, plug them into the equation E = p^2/2m + V and get a result which you can also verify is the Schroedinger equation. If you can multiply two complex numbers together, you can do this:

E -> i hbar d/dt; p -> -i hbar d/dx

i hbar d/dt = -(hbar^2/2m) d^2/dx^2 + V

Now just operate on a function (called a wavefunction) and solve for the function. OR use the commutation relations for an abstract approach (cf Dirac's theory of the harmonic oscillator).
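
For anyone who wants to check that substitution mechanically, here is a short symbolic sketch in Python. It assumes the sympy library is available and sets V = 0 for the free-particle case; it verifies that a plane wave with the dispersion relation omega = hbar*k^2/(2m) satisfies the equation the substitution produces.

    import sympy as sp

    x, t, m, hbar, k = sp.symbols('x t m hbar k', positive=True)
    omega = hbar * k**2 / (2 * m)             # free-particle dispersion relation
    psi = sp.exp(sp.I * (k * x - omega * t))  # trial plane-wave solution

    lhs = sp.I * hbar * sp.diff(psi, t)              # E -> i hbar d/dt
    rhs = -(hbar**2 / (2 * m)) * sp.diff(psi, x, 2)  # p^2/2m with p -> -i hbar d/dx
    print(sp.simplify(lhs - rhs))   # prints 0: the plane wave satisfies the equation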
 scorpiomover
Joined: 4/19/2007
Msg: 154
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/25/2010 6:19:44 PM
RE Msg: 211 by Kardinal Offishall:
What’s up Scorpio.
Hey.

I was going to write a reply to your post. I was taking my time, thinking about your post, as I felt it was quite well written, and deserved an equally well-thought-out reply. I had reached my conclusions. But the other day, I was in Waterstones, and I saw Steven Pinker's "How the mind works", and I thought of your post. I thought that I really ought to give Steven Pinker a fair chance. So I bought it to read. So I think I'll wait for a bit, until I've read it, before replying. That way, I can honestly say that I've given your views a really fair chance.
 abelian
Joined: 1/12/2008
Msg: 155
IQ is a garbage tool for determining intelligence
Posted: 7/26/2010 9:23:15 AM
Of course Maxwell wasn't working in an intellectual vacuum. Well, a logician would say that, because without ANY evidence, he'd have had to repeat every previous significant experiment by sheer chance, and the probability that he just discovered them all by chance,

You should really read Maxwell's original paper on this. First of all, Maxwell did not end with only the 4 equations that bear his name. He ended up with 20. Second, the 4 equations that bear his name already existed and with the exception of Ampere's law, stand as they did prior to Maxwell. What Maxwell did was notice that those four equations as they stood, did not conserve electric charge and he added one term to Ampere's law to fix that problem. (That term happens to imply electromagnetic radiation). For that very astute observation, Maxwell gets the credit. However, it was conservation of electric charge that motivated the addition of that term. Maxwell's original 20 equations were more complicated than necessary because he wanted to give the vector potential physical meaning in terms of the aether (and later decided that he could not accomplish that objective with his theory). It is, in fact impossible to treat the vector and scalar potentials as anything but mathematical artifices using only Maxwell's equations.
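
For readers following along, the added term is the displacement current. In modern notation (a standard textbook statement, not a quote from Maxwell's paper), the corrected Ampere's law reads

    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}

Taking the divergence of both sides, using \nabla \cdot (\nabla \times \mathbf{B}) = 0 and Gauss's law \nabla \cdot \mathbf{E} = \rho / \varepsilon_0, yields the continuity equation \nabla \cdot \mathbf{J} + \partial \rho / \partial t = 0, which is conservation of charge. Without the added term, the equations force \nabla \cdot \mathbf{J} = 0, which fails whenever the charge density changes.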

Unfortunately, this is impossible. Even if you used every DNA base-pair for one program, you'd get about 40Mb, which would never be enough to even process language, let alone audio or visual data.

Your concept of computation is very naive. First of all, any computation that can be done on a computer may also be done using a Turing machine when supplied with the appropriate program. A Turing machine requires only 4 symbols and 7 states to do this. (Vitanyi and Li, ``An Introduction to Kolmogorov Complexity and its Applications''). A DNA molecule may be treated as a string coded in a language based on a 4-ary alphabet where the 4 symbols of the alphabet are the nucleotide bases. That makes it formally a computer program. Your assertion that 40 Mbits (or even MBytes) is not sufficient is false on its face. The instructions encoded in a DNA molecule obviously produce an organism with the abilities you claim are not possible, regardless of whether that is 1 kb or a billion times what you claim. (Your claim about ``only 40 Mb'' is also naive because 40 Mb of information could easily be a hell of a lot more information than what a computer program written in, for example C, could contain.) Information content is determined by the shortest string which describes something, not what it requires to specify it in a language which may or may not be optimal or convenient for humans to use. The fact that you can take the text of a computer program and compress it tells you immediately that the ANSI specification is not the shortest possible way to code the same program.
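
The compressibility point is easy to demonstrate. The Python sketch below (purely illustrative) compresses a highly repetitive program-like string and an equal-length block of random bytes: the repetitive text shrinks dramatically, showing its ASCII form is far from its shortest description, while the random bytes barely shrink at all.

    import random
    import zlib

    random.seed(0)   # seeded so the illustration is reproducible
    source = ("int add(int a, int b) { return a + b; }\n" * 200).encode()
    noise = bytes(random.getrandbits(8) for _ in range(len(source)))

    for label, data in (("repetitive source", source), ("random bytes", noise)):
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{label}: compresses to {ratio:.1%} of original size")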

Again, not really possible. The number of possible variations in programs that are not detected except by physical testing, is billions per program. Each program would have to randomly occur in a billion separate humans before one would get it right enough for it to be a useful adaptation.

That is again, naive. Given an alphabet and a language, not all combinations form syntactically correct sentences, so obviously, you need only consider combinations of symbols which are syntactically correct. (This is no different than programming in a language like C. The compiler will not compile programs which are syntactically invalid.) In this case, it means mutations which give rise to a DNA molecule that contains the proper instructions to produce an organism. Mutations which aren't valid programs are called miscarriages. The mutations which give rise to adaptable organisms are more likely to survive to produce offspring with the same DNA than those organisms which don't adapt. There is a LOT of information in a DNA molecule, and computer programs written by humans are very crude and simplistic by comparison. Nature's had billions of years for debugging and compression.
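
The gap between blind search and cumulative selection, which this paragraph turns on, can be seen in a few lines of code. Below is a variant of Dawkins' well-known "weasel" toy in Python (a sketch, not a model of real genetics): blind search over a 28-character string would take on the order of 27^28 attempts, but keeping the best mutant each generation typically hits the target within a couple of hundred generations.

    import random
    import string

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "

    def mutate(parent, rate=0.05):
        # copy the parent, changing each character with small probability
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in parent)

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # cumulative selection: keep the best of the parent plus 100 offspring
        pool = [parent] + [mutate(parent) for _ in range(100)]
        parent = max(pool, key=fitness)
        generations += 1
    print(f"matched the target after {generations} generations")

The point of the toy is only that selection retained between rounds collapses an astronomically large search space, which is the standard reply to the "billions of random tries per program" objection.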
 scorpiomover
Joined: 4/19/2007
Msg: 156
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/26/2010 3:21:35 PM
RE Msg: 210 by abelian:

Again, clearly, if I give you an inch, you take a yard. Clearly, you don't realise that that shows a huge level of psychological information about you, the way you think, and all your transactions. Oh, and I show myself too in my words. But I know it, and I don't mind at all.


Mathematicians and physicists are trained in entirely different ways.
However, the difference is not the difference you think it is.
If it's displayed by the way you write, then physicists would be a lot stupider than I give them credit for. Personally, having met other physicists, I'd be inclined to say that you're just the bad apple of the bunch.

Improve your reading skills and try to make use of simple logic.
OK.

What I said was:
(2) What's even stranger about your comment is that (a) proving the quantum mechanical formalism correct did not result in solving any mysteries apart from justifying the use of things like the Dirac delta function, which was used extensively by Dirac when the theory was developed, but only mathematically justified 30 years later in the context of distributions.
Simple logic:

Why would anyone teach you that, if it didn't teach you anything useful?

Why would it be useful, if all it did was justify the use of some weird mathematical thing, that has no relevance to reality, and doesn't teach us anything?

No-one in university would do that, not when it comes to physics, and certainly not when it comes to mathematics.

So you are talking 100% BS.

You were taught it in your physics courses because it is extremely important for physicists to know; it teaches you loads of stuff that you simply would have no clue about otherwise.

I've already explained the relationship between mathematics and physics to you at least once, yet you seem to be incapable of grasping it.
Try explaining it again. So far, it didn't make sense.

Your notion of a hierarchy in which mathematicians are somehow superior thinkers is why you have so much difficulty understanding the relationship between mathematics and physics and continually make assertions which are contradicted by historical fact.
You really don't get it, do you?

Mathematics and physics are not in competition.
If you have mathematics but no physics, you understand numbers, but not their relation to physical phenomena. You can still do stuff with numbers, just not mechanics, or other parts of physics.
If you have physics but no mathematics, you have a lot of observations about physical phenomena, but no way to understand them. It doesn't help at all.

That's why the really famous physicists of history, who made the biggest impacts on the world, knew mathematics well. They knew that to be a really great physicist, to make a huge impact in physics, they needed to understand the data they got from experiments, and make huge insights, and to make huge insights from data, you need mathematics.

You really consider that to be ``a lot of equations,'' especially given that you are making claims about the very mathematics those simple equations entail?
No. I consider them trivial levels of mathematics. I'd write that many equations in less than a minute, doing maths, even just playing, and have done equations that are so much more complex than that, that what you wrote is nothing more than ABC. But to expect everyone on this site to understand them, would not be fair on the rest of the posters.

Heck, I wouldn't even burden you with my equations. They would be too far beyond your ken.

If you don't understand something and can't figure it out by looking it up, just ask and I'll explain it in as much detail as you require, provided simple English and concepts that adults can understand with a little (but not necessarily zero) thought are sufficient.
Why? It's easy to observe one's ability to describe equations and the concepts behind them adequately from the way one writes those equations in the first place, especially with the level of experience I have from seeing so many people describe those equations, then asking them to explain, and seeing the results. Your way of expressing yourself has shown me that if I were to ask you, I'd just get more psychobabble.

I simply cannot take any view of your writing on such equations, as I cannot say if it is right or wrong, and my mathematical training has taught me to never blindly accept the word of anyone about anything I cannot prove to myself, just because they sound clever due to using jargon.

And you claim to have a mathematical background? Wow. Everything I posted is mathematics that is fairly simple for a mathematician who has had an introductory course in abstract algebra, but really doesn't require nearly that much sophistication. I mean, how hard can it be to take the definitions I provided (and you can easily look up) for the classical variables E and p, plug them into the equation E = p^2/2m + V and get a result which you can also verify is the Schroedinger equation. If you can multiply two complex numbers together, you can do this:

E -> i hbar d/dt; p -> -i hbar d/dx

i hbar d/dt = -(hbar^2/2m) d^2/dx^2 + V

Now just operate on a function (called a wavefunction) and solve for the function. OR use the commutation relations for an abstract approach (cf Dirac's theory of the harmonic oscillator).
What the heck? What is hbar? That's not in algebra. What on earth is the operator ->? What is E? What is p?

You are making huge assumptions about everyone else's level of knowledge. You didn't figure this out for yourself, or even try to. If you had, then you'd be at least a bit clearer.

You got taught this in a course, and you are simply expecting that everyone else can do it, because you didn't realise just how much work went into working out those formulas in the first place. So you're not teaching it like a teacher. You're teaching it like someone who got taught it, and is just regurgitating it, without clearly explaining what you are actually trying to say.

RE Msg: 213 by abelian:
Of course Maxwell wasn't working in an intellectual vacuum. Well, a logician would say that, because without ANY evidence, he'd have had to repeat every previous significant experiment by sheer chance, and the probability that he just discovered them all by chance,
You should really read Maxwell's original paper on this.
Maybe I will. But I've come across Einstein's original papers on relativity, and since I already have them, I'll read them first.

First of all, Maxwell did not end with only the 4 equations that bear his name. He ended up with 20.
Of course he didn't end up with only 4 equations. He was a mathematician as well as a physicist. Mathematicians write tons of stuff. The equations that bear their name are only a tiny handful of their actual work.

A classical example is Cauchy. There are several equations that bear Cauchy's name. However, Cauchy wrote over 70 books on mathematics. To even suggest that Cauchy's contributions to mathematics consisted only of the formulas that bear his name, is laughable.

The same is true of lots of mathematicians. That you even feel the need to point this out means that you believe someone might plausibly assume that one's work consists only of that which one is famous for. It's a cognitive train of thought so anathema to all that I know of logic and reason, that it gives me shivers to even consider it.

Second, the 4 equations that bear his name already existed and with the exception of Ampere's law, stand as they did prior to Maxwell. What Maxwell did was notice that those four equations as they stood, did not conserve electric charge and he added one term to Ampere's law to fix that problem. (That term happens to imply electromagnetic radiation). For that very astute observation, Maxwell gets the credit.
I doubt that. The way you make it out, he only made one tiny change. That alone would not be worth such credit, in mathematics, or in any other field. Certainly Stephen Hawking would not consider Maxwell such a genius, if that was all he did.

However, if he did only make that tiny change, then his credit was NOT due to making that tiny change. It was being able to see all the different ways to integrate the knowledge about electro-magnetism of his time, and all the equations and relationships, and to find a way to make one little tweak, and bring everything together.

There is a story about genius, that illustrates the point. A world-famous engineer was on his holidays, on a cruise. The captain came to him, and said that the ship had an awful racket in a part of the ship, that was really irritating some of the passengers. The engineer replied that he was on his holidays, and didn't want to work. The captain pleaded with him. Several of the irate passengers were very important people, who were threatening to sue, and could cause such problems for the company, that the company would lose millions, and the captain would lose his job, if the matter was not sorted. So the engineer agreed, on one condition, that he was to be paid $50,000. The captain agreed readily, as that was nothing compared to what might happen if the matter was not resolved quickly.

So the engineer takes his tools with him, and goes around the ship, listening here, testing there. Finally, he gets to a certain joint of a certain pipe, takes out a small hammer, and taps it once. The pipe lets out an almighty whistle, and a huge head of steam is released. Suddenly, the noise stops dead.

The captain cannot be more thankful. "What can we possibly do for you?" he says. "Pay me my $50,000", replies the engineer. The captain says to the engineer that he cannot justify paying $50,000 for one small tap of a hammer. He says that he needs some kind of proper justification for the company. So the engineer gets out a pad, and writes out a bill. The bill read:

Cost of 1 tap with a hammer: $1.00
Cost of knowing exactly where to tap: $49,999.00

The captain paid out.

However, it was conservation of electric charge that motivated the addition of that term.
Simple logic: If it was so easily solved, then surely Ampere, and every other physicist before Maxwell, would have already seen that, and added it.

Maxwell's original 20 equations were more complicated than necessary because he wanted to give the vector potential physical meaning in terms of the aether (and later decided that he could not accomplish that objective with his theory). It is, in fact impossible to treat the vector and scalar potentials as anything but mathematical artifices using only Maxwell's equations.
Before I'd say that, I'd have to read Maxwell's paper, and study it at length. I have found that people are often too quick to dismiss mathematical equations as only being able to be explained as mathematical concepts, only for me to explain it to someone who failed mathematics at high school, in 5 minutes, using a simple example.

Unfortunately, this is impossible. Even if you used every DNA base-pair for one program, you'd get about 40Mb, which would never be enough to even process language, let alone audio or visual data.
Your concept of computation is very naive.


First of all, any computation that can be done on a computer may also be done using a Turing machine when supplied with the appropriate program.
You do understand that ALL computer designs since Turing, were based on the principle of a Turing machine, don't you? Computer programs HAVE to be capable of being performed on a Turing machine by definition.

A Turing machine requires only 4 symbols and 7 states to do this. (Vitanyi and Li, ``An Introduction to Kolmogorov Complexity and its Applications'').
You are referring to the minimum size of the CPU, not the program, and certainly not the maximum compression rate for any program, considering every possible algorithm.
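
Since the machine/program distinction matters here, a minimal interpreter may make it concrete. The Python sketch below is illustrative only, and is not the 4-symbol, 7-state universal machine the citation refers to: the fixed loop is the "machine", and the transition table passed into it is the "program".

    # The fixed loop below is the "machine"; the transition table passed to it
    # is the "program". This toy program flips a run of 1s to 0s, then halts.
    def run(table, tape, state="start", pos=0):
        cells = dict(enumerate(tape))
        while state != "halt":
            symbol = cells.get(pos, "_")               # "_" is the blank symbol
            write, move, state = table[(state, symbol)]
            cells[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    table = {
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run(table, "111"))   # -> 000_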

A DNA molecule may be treated as a string coded in a language based on a 4-ary alphabet where the 4 symbols of the alphabet are the nucleotide bases. That makes it formally a computer program.
Why are you saying something completely trivial?

The instructions encoded in a DNA molecule obviously produce an organism with the abilities you claim are not possible, regardless of whether that is 1 kb or a billion times what you claim.
I didn't say that DNA can't do it. I know how it can be done. I just said that it cannot be done as described.

(Your claim about ``only 40 Mb'' is also naive because 40 Mb of information could easily be a hell of a lot more information than what a computer program written in, for example C, could contain.)
Are you kidding? C is hugely inefficient. Now, if you were talking machine code, or assembler, that can be optimised far, far more than standard C compilers normally provide. That inefficiency is so poor that a program written directly in assembler for a given CPU, which has a 1-to-1 relationship with its machine code, even simple code that should only be capable of a tiny amount of optimisation, runs 10 times faster than the same thing written in C.

But even with all that, to develop a computer program that could just handle English to the extent that a 10-year-old can speak, with all his nuances, idioms, slang, and "in" jokes, is still something that I doubt that any computer programmer alive could fit into 40Mb. Then you have to consider that some people speak 5 languages fluently, and some have even mastered 20. Even if I wanted to code for language in general, I'd have to include every variation, of every set of grammar, according to each combination, that exists in every language, to result in a single language program that could handle every language. It would be monstrous in size.

Information content is determined by the shortest string which describes something, not what it requires to specify it in a language which may or may not be optimal or convenient for humans to use. The fact that you can take the text of a computer program and compress it tells you immediately that the ANSI specification is not the shortest possible way to code the same program.
Of course an ANSI string isn't the smallest possible combination. Anyone who has studied even a bit about compression programs and compression algorithms knows that. The problem is, what is the shortest string for a piece of text? I can easily compress text down to 10%. But programs? Almost nothing, because the programs already have compressed their data substantially. You normally get 0% compression rate on programs. So you're not proving anything, showing off that you can observe compression rates on programs on your computer.


Again, not really possible. The number of possible variations in programs that are not detected except by physical testing, is billions per program. Each program would have to randomly occur in a billion separate humans before one would get it right enough for it to be a useful adaptation.
That is again, naive. Given an alphabet and a language, not all combinations form syntactically correct sentences, so obviously, you need only consider combinations of symbols which are syntactically correct.
Of course you don't. But if you break any syntactically correct sentence down to its pure logical statement, you still get a hefty amount of data, unless you know how to code far more creatively than you are making out.

(This is no different than programming in a language like C. The compiler will not compile programs which are syntactically invalid.)
Great. Someone who learned C in university, and thinks they know coding. Coding isn't about knowing a language. It's about knowing how to code in 1 line, what everyone else would need 100 lines to do.

In this case, it means mutations which give rise to a DNA molecule that contains the proper instructions to produce an organism. Mutations which aren't valid programs are called miscarriages.
That's an extraordinary claim. Do you have the extraordinary evidence to back it up? A simple chemical equation that describes exactly how DNA detects syntactically incorrect programs, would suffice.

The mutations which give rise to adaptable organisms are more likely to survive to produce offspring with the same DNA than those organisms which don't adapt.
Now you're just stating Darwin's theory. Tell me something useful.

There is a LOT of information in a DNA molecule, and computer programs written by humans are very crude and simplistic by comparison. Nature's had billions of years for debugging and compression.
Of course DNA works, because if it didn't, we wouldn't be able to write these posts. But you seem to think you understand it, without actually working out what is and isn't possible, by both practical experience and calculation. You also seem to think that others have missed this obvious clue, and it's your job to "enlighten" them.

Cut the sh*t. Say something meaningful, that is more than a trivial statement.
 abelian
Joined: 1/12/2008
Msg: 157
IQ is a garbage tool for determining intelligence
Posted: 7/26/2010 9:06:01 PM
Other self defensive drivel *snipped*

What the heck? What is hbar? That's not in algebra. What on earth is the operator ->? What is E? What is p?

If you don't know what those mean, you have no basis for making the claims about physics that you seem compelled to spout off.

I doubt that. The way you make it out, he only made one tiny change. That alone would not be worth such credit, in mathematics, or in any other field. Certainly Stephen Hawking would not consider Maxwell such a genius, if that was all he did.

Well, that is what he did and it was a profound insight regardless of what you think.

Simple logic: If it was so easily solved, then surely Ampere, and every other physicist before Maxwell, would have already seen that, and added it.

What you call simple logic doesn't trump historical fact, no matter how much you kick and scream.

You attempted to attribute the quote below to me:

Unfortunately, this is impossible. Even if you used every DNA base-pair for one program, you'd get about 40Mb, which would never be enough to even process language, let alone audio or visual data.

And you used my reply saying that your concept of computing is naive to make it appear that it was you who said that to me. You are very dishonest.

You do understand that ALL computer designs since Turing, were based on the principle of a Turing machine, don't you? Computer programs HAVE to be capable of being performed on a Turing machine by definition.

Even though what you wrote is just more posturing and isn't even relevant to what I pointed out, I'll answer it since you're wrong. Analog computers were nothing more than amplifiers with feedback loops configured to differentiate or integrate input to solve a differential equation. They certainly weren't Turing machines.


I didn't say that DNA can't do it. I know how it can be done. I just said that it cannot be done as described.

As a matter of fact, you did say that and you tried to attribute what you said to me as noted above. Your comments and the context in which you provided them have been reproduced below for your benefit:


Kardinal Offishall wrote:
As for cognitive adaptations, an easy way to understand what they are is to in a sense think of them in terms of software programs. This notion is central to evolutionary psychology, which posits the existence of a multitude of such innate (in-born) programs, which in turn possess a genetic basis.

You replied:
Unfortunately, this is impossible. Even if you used every DNA base-pair for one program, you'd get about 40Mb, which would never be enough to even process language, let alone audio or visual data.

So obviously you said exactly what you claimed you did not say (unless you think ``genetic basis'' has nothing to do with genes and DNA, which would be weird to say the least).

Why are you saying something completely trivial?

If you really found what I said to be trivial, you would not have considered this to be an extraordinary claim:


Me:
In this case, it means mutations which give rise to a DNA molecule that contains the proper instructions to produce an organism. Mutations which aren't valid programs are called miscarriages.

You:
That's an extraordinary claim. Do you have the extraordinary evidence to back it up? A simple chemical equation that describes exactly how DNA detects syntactically incorrect programs, would suffice.

If you consider what I said about the DNA molecule trivial, then what exactly is extraordinary about that?

If you're going to try and bs your way through by posturing, at least try to be logically consistent.
 scorpiomover
Joined: 4/19/2007
Msg: 158
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/27/2010 6:41:53 AM
RE Msg: 215 by abelian:
Other self defensive drivel *snipped*
Now, now. Just because you're not up to the argument, there's no need to try to use the ad hominem without any proof. I know it's your favourite type of defence, when you have no way to prove me wrong.


What the heck? What is hbar? That's not in algebra. What on earth is the operator ->? What is E? What is p?
If you don't know what those mean, you have no basis for making the claims about physics that you seem compelled to spout off.
Of course I know what E and p are. hbar took me longer, because I've never seen it written like that, only as the symbol. -> I can guess at, but I'm loath to, as the standard usage would be =, and even you would know that.

But I've never seen anyone write like that, certainly not in university. Even when my professors used the Kronecker delta, they still wrote it out again, just to be clear. I'm just used to lecturers being clear about what they write, not just writing out a few lines and expecting everyone else to know it. Mind you, it worked for them to teach the Hahn-Banach theorem, uniform convergence, and eigenvalues and eigenvectors, and to teach us how to work them out quite easily.


I doubt that. The way you make it out, he only made one tiny change. That alone would not be worth such credit, in mathematics, or in any other field. Certainly Stephen Hawking would not consider Maxwell such a genius, if that was all he did.
Well, that is what he did and it was a profound insight regardless of what you think.
I do think his point was a profound insight. I wrote that clearly. I even gave a little story to illustrate it. It's YOU that wrote:
Second, the 4 equations that bear his name already existed and with the exception of Ampere's law, stand as they did prior to Maxwell. What Maxwell did was notice that those four equations as they stood, did not conserve electric charge and he added one term to Ampere's law to fix that problem. (That term happens to imply electromagnetic radiation). For that very astute observation, Maxwell gets the credit. However, it was conservation of electric charge that motivated the addition of that term.
Your statement implied that Maxwell achieved very little. I contend the opposite: that what he achieved was monumental, because he understood the formulae well enough to realise they could be simplified, where others could not see any possibility of doing so.


Simple logic: If it was so easily solved, then surely Ampere, and every other physicist before Maxwell, would have already seen that, and added it.
What you call simple logic doesn't trump historical fact,
Historical fact says it was done, but not that it was easy. It's your claim that it was easy. It's my claim that it was done, as that is historical fact, but that if it had been easy, then others would have already done it. So my conclusion is that it was done, but that it wasn't easy.

no matter how much you kick and scream.
Who is kicking and screaming? I made it clear in my post that I knew it was done. I am only disagreeing with your interpretation of the events, not with what happened.


You attempted to attribute the quote below to me:
Unfortunately, this is impossible. Even if you used every DNA base-pair for one program, you'd get about 40Mb, which would never be enough to even process language, let alone audio or visual data.
Typo. I was going to respond to your response to MY statement. But I didn't quite realise that I hadn't, and I hadn't double-quoted it either. It was an error of typing. I do make errors occasionally.

And you used my reply saying that your concept of compiting is naive to make it appear that it was you who said that to me. You are very dishonest.
As I said, it was a typo. If you wish to call me dishonest for making the occasional typo, then do so. Lots of people make typos, spelling mistakes, etc. I don't hold them to it. That would be pernickety over trifles.


You do understand that ALL computer designs since Turing, were based on the principle of a Turing machine, don't you? Computer programs HAVE to be capable of being performed on a Turing machine by definition.
Even though what you wrote is just more posturing and isn't even relevant to what I pointed out, I'll answer it since you're wrong. Analog computers were nothing more than amplifiers with feedback loops configured to differentiate or integrate input to solve a differential equation. They certainly weren't Turing machines.
I guess you could say that if you were being pernickety, then analogue computers don't strictly conform to Turing machine specifications, and that there were one or two analogue computers still in existence in the 60s, after Turing died.

However, coming from a mathematical background, my response was that "Of course they were Turing machines. They were just using analog signals instead of digital registers, and the data was held in wires, etc. in analogue form, rather than in a traditional central set of registers. Granted, it requires a slightly more open-minded view of Turing's original. But I doubt that Turing would have had a problem with that, considering that he was trying to come up with an all-purpose method of working out results, rather than being very rigidly restrictive, and that it's quite common in mathematics to come up with a field, and then extend it to a more general form."

My response was as such because, in mathematics, the form doesn't matter, only whether the properties fit the axioms well enough that the theorem holds. The properties of analogue computers conform to the basis of a Turing machine closely enough that you can use Turing's description and logic to program an analogue computer almost the same way as a digital one; and any analogue computer that could not be programmed similarly to a digital computer would be so far removed from any basis of a Turing machine that its properties would no longer conform to what we would call an analogue computer.

He did an honours degree in mathematics at Cambridge. So I think he probably would have taken a similar approach.


I didn't say that DNA can't do it. I know how it can be done. I just said that it cannot be done as described.
As a matter of fact, you did say that and you tried to attribute what you said to me as noted above. Your comments and the context in which you provided them have been reproduced below for your benefit:

Kardinal Offishall wrote:
As for cognitive adaptations, an easy way to understand what they are is to in a sense think of them in terms of software programs. This notion is central to evolutionary psychology, which posits the existence of a multitude of such innate (in-born) programs, which in turn possess a genetic basis.
You replied:
Unfortunately, this is impossible. Even if you used every DNA base-pair for one program, you'd get about 40Mb, which would never be enough to even process language, let alone audio or visual data.
So obviously you said exactly what you claimed you did not say
You can't program DNA using the description of DNA containing cognitive programs. It wouldn't work. But obviously, DNA results in a brain that is capable of thought, that can be programmed, such as arithmetic, and algebra. So obviously, there is a connection. I just said that it couldn't be what one would call a "cognitive program".

You seem to take my words at face value, but never to think them through, literally or otherwise. Even when I've explained things to 5-year-olds, they've asked better questions than this. I am wondering if you just look for views that support a superiority bias.



A DNA molecule may be treated as a string coded in a language based on a 4-ary alphabet, where the 4 symbols of the alphabet are the nucleotide bases. That makes it formally a computer program.
Why are you saying something completely trivial?
If you really found what I said to be trivial, you would not have considered this to be an extraordinary claim:

Me:
In this case, it means mutations which give rise to a DNA molecule that contains the proper instructions to produce an organism. Mutations which aren't valid programs are called miscarriages.
You:
That's an extraordinary claim. Do you have the extraordinary evidence to back it up? A simple chemical equation that describes exactly how DNA detects syntactically incorrect programs, would suffice.
I found your description of DNA as a form of program trivial. I found your claim that mutations which give rise to invalid programs are automatically miscarried quite an extraordinary claim that lacks proof.
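
(The triviality is easy to make precise: a 4-symbol alphabet carries \log_2 4 = 2 bits per symbol, so a string of n bases encodes at most 2n bits of raw information.)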

If you consider what I said about the DNA molecule trivial, then what exactly is extraordinary about that?
That the gamete or the host mother has the ability to distinguish DNA with syntactically incorrect sentences, and reject it. Even amongst those which have a lot of syntactically correct sentences, some can be incorrect.

If the foetal DNA had a minority of syntactically incorrect sentences, then most of the genes work, but a few don't, such as in quite a few genetic diseases. If the host mother or the foetus had such a detection system, and the detection acted to abort foetuses with a minority of such syntactically incorrect genes, then all foetuses with such a minority of deficient genes would be miscarried. So no such babies would be born. But they are. We know that humans are born, living, with such mistakes in a small part of their DNA. So such a detection system for a minority of genes contradicts what we know.

If the foetal DNA had a majority of syntactically incorrect sentences, then it would not be able to follow the procedure for making those proteins for those sentences, as they would never complete. So the majority of such proteins would never form, and the foetus would not live very long, if it got started at all. It would be like a rock in the mother's womb. Over a short time, the mother's womb would naturally become aware the chemical traffic was not flowing correctly, and that would kick off other hormonal markers that would be activated by a lack of flow of certain important hormones required for foetal growth. Then those hormones, if they built up enough, would indicate the foetus was dead, and the mother's womb would then eject the foetal cells, just like it would for a normal egg that was never fertilised.

So if the detection system worked only if the majority of genes didn't work, then the foetus would be dead anyway. So the detection system would detect if the foetus was alive, not if the genes were syntactically correct or not.

Either way, there is no way for such a detection system to exist in the mother, not for miscarriages, not directly anyway.

There is always the possibility that there is a detection system which scans for certain standard patterns of error, and then causes those foetuses to self-destruct, or the host mother either disengages the food supply, and lets the foetus starve, or attacks it with her own cells. Then once the foetus has changed status to a non-viable construct, then it gets the same category as a foreign body, and is expelled, just like any other foreign body.

But then, it's not looking for syntactically correct sentences at all, and clearly, it lets plenty through, as is the case with many genetic diseases.

If you're going to try and bs your way through by posturing, at least try to be logically consistent.
I prefer not to posture. It's just not me. Besides, I'm not as good at social skills as you. It's why I didn't stick with academia. Too much office politics, which I was lousy at.

I prefer to consider things using logic.
 abelian
Joined: 1/12/2008
Msg: 159
IQ is a garbage tool for determining intelligence
Posted: 7/27/2010 8:47:30 AM
Of course I know what E and p are. hbar took me longer, because I've never seen it written like that, only as the symbol.

Hmmm. What you said was:

What the heck? What is hbar? That's not in algebra. What on earth is the operator ->? What is E? What is p?

Since you asked what E and p are, either you didn't know or you're just trying to digress on a tangent to avoid the point I was illustrating.

-> I can guess at, but I'm loath to, as the standard usage would be =, and even you would know that.
How about starting with the standard meaning used by mathematicians, i.e., ``maps to'' and employing a little logical reasoning, like: ``Let's see, the classical variable E maps to the operator, i hbar d/dt in quantum theory.'' I even gave an example, but you were too busy trying to tell me that you don't want to believe what I write (although I also noted that you could easily look any of this up if you didn't believe me). Don't ask me what something means in one post and then tell me ``Of course I know what _____ means'' in the next. Doing that gives me the idea that you're just backpedalling with lots of digressions designed to make it look like you know what you are talking about after you've shown you don't.
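
Spelled out, the standard first-quantisation substitutions (as written in any quantum mechanics text) are:

E \to i\hbar \frac{\partial}{\partial t}, \qquad \vec{p} \to -i\hbar \nabla

so that the classical energy relation E = \frac{p^2}{2m} + V becomes the Schrödinger equation:

i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \psi + V \psi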

Your statement implied that Maxwell achieved very little.

You are again inventing bullsh!t just to try to contradict me. All you had to do was read what I wrote in the quote you included from my post and you'll notice that I said ``For that very astute observation, Maxwell gets the credit.'' What you want to think that implied is based on your personal agenda, not on anything I wrote.

Historical fact says it was done, but not that it was easy.

You said it was easy, not me. Here's what you said:

Simple logic: If it was so easily solved, then surely Ampere, and every other physicist before Maxwell, would have already seen that, and added it.

Don't try to twist what I say and then try to pin your twisted interpretation on me, because I'm going to quote you every time you backpedal and contradict yourself.

I guess you could say that if you were being pernickety,

You brought it up (for no reason other than to try to find something to digress and argue with) and you capitalized ``ALL'' in ``ALL computers.'' Stop including extraneous bs for the sake of posturing and you won't keep putting your foot in your mouth.


You can't program DNA using the description of DNA containing cognitive programs. It wouldn't work. But obviously, DNA results in a brain that is capable of thought, that can be programmed, such as arithmetic, and algebra. So obviously, there is a connection. I just said that it couldn't be what one would call a "cognitive program".

Kardinal Offishall gave you an example of just such a cognitive program:

By your quaint reasoning, complex life-forms should be impossible to evolve. Yet (magically) here we are. Gazelles were programmed by evolution to avoid Cheetahs. They’re still around, right?

There's an example of the cognitive program based on genetics you claim doesn't exist.

That the gamete or the host mother has the ability to distinguish DNA with syntactically incorrect sentences,

The syntactically incorrect DNA is self evident in the fact that the cells in the embryo fail to keep dividing to the point of producing a living organism, just like syntactically incorrect computer programs fail to compile into a program that executes. Either you really are stumped by the possibility of expanding your concept of a program to be completely general in the way that most scientists do (especially those who know something about information theory), or you're being contrary for the sake of being contrary. I've attempted to give you the benefit of the doubt by assuming you're being contrary for the sake of being contrary rather than assume the less flattering alternative.

So if the detection system worked only if the majority of genes didn't work, then the foetus would be dead anyway.

Well, that was exactly the point, so I'm not sure why you spent so many paragraphs arguing against it.

So the detection system would detect if the foetus was alive, not if the genes were syntactically correct or not.

That isn't even relevant, but since you seem to think it is, the answer to your objection is this: why is detecting a dead embryo not equivalent to detecting syntactically correct DNA?

But then, it's not looking for syntactically correct sentences at all, and clearly, it lets plenty through, as is the case with many genetic diseases.

The obvious software-world analogy to genetic abnormalities is programs which are syntactically correct and therefore compile and run, but which contain logical errors that result in the program behaving differently than expected. Just because a program runs doesn't mean it doesn't contain bugs. It just means the bugs are due to errors which are, nevertheless, syntactically correct statements.

I prefer to consider things using logic.

Then what is keeping you from demonstrating that preference in your posts?
 scorpiomover
Joined: 4/19/2007
Msg: 160
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/27/2010 3:44:18 PM
RE Msg: 217 by abelian:

Of course I know what E and p are. hbar took me longer, because I've never seen it written like that, only as the symbol.
Hmmm. What you said was:
What the heck? What is hbar? That's not in algebra. What on earth is the operator ->? What is E? What is p?
Since you asked what E and p are, either you didn't know or you're just trying to digress on a tangent to avoid the point I was illustrating.
I was pointing out how, in any mathematical theorem, you'd be required to state them for clarity, and you didn't.


-> I can guess at, but I'm loath to, as the standard usage would be =, and even you would know that.
How about starting with the standard meaning used by mathematicians, i.e., ``maps to''
Not applicable, unless you are talking about a function, as -> is used not to define the value returned by the function, but to indicate domain and co-domain. For instance, in mathematics, one would write "f is a function f:Z->Z s.t. f(x)=x^2."

and employing a little logical reasoning, like: ``Let's see, the classical variable E maps to the operator, i hbar d/dt in quantum theory.''
It wouldn't make sense in mathematics. If you wrote E = i.hbar.dp/dt, that would make sense. It's been a while since I've dealt with multi-dimensional functional operators, and I think that could be what you're alluding to. But they need a clear definition, not a general word description such as "maps to the operator". That assumes too much that is unclear to work with in mathematics without generating ambiguity and errors.

FYI, in mathematics, one uses multiplication symbols to indicate the multiplication operator, or an equivalent operator, such as a dot (.), a cross (x), or even an asterisk (*).

If it's clear that multiplication or an equivalent operator is implied, then one writes the 2 symbols side-by-side, with no spaces, for instance, one writes 2b, or ab, in algebra over the real numbers, instead of 2.b, or a.b. One can do the same in group theory, such as writing ab, instead of a*b, where * is the operator on the group. But one can only use implied operators, where the nature of the implied operator is clear.

Spaces indicate a separation, such as "2 x 3 = 6". But it breaks up the equation, and so the implied operator cannot be used, and must be formally stated. So in mathematics, one doesn't write "i hbar". That just means that something is missing from the equation, and the equation is syntactically incorrect, even as slang.

I even gave an example, but you were too busy trying to tell me that you don't want to believe what I write (although I also noted that you could easily look any of this up if you didn't believe me).
I did. I had no idea what you were talking about, when you used delta_ij. It made no sense. Finally I looked up "first quantisation", found the formulas you seemed to be referring to, and saw the Kronecker delta symbol. So then I knew that what you meant by delta_ij was, as my professors would write:

"delta_ij is the Kronecker delta, where delta_ij = { 1 if i=j and 0 if i<>j}"

It's only one line, I know. But it turns gibberish into logic, in mathematical theorems.
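
In LaTeX, that one line is simply:

\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}

and the first-quantisation formula it presumably appeared in is the canonical commutation relation [\hat{x}_i, \hat{p}_j] = i\hbar\,\delta_{ij}.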

Don't ask me what something means in one post and then tell me ``Of course I know what _____ means'' in the next. Doing that gives me the idea that you're just backpedalling with lots of digressions designed to make it look like you know what you are talking about after you've shown you don't.
Fair enough. I was trying to get you to be clear. It's a standard way of clarifying mathematics between mathematicians. But you seem not to want to be clear, not even when prompted. However, even laymen are normally willing to be clear. IME, the only people who refuse are those who don't like to be that clear, because clarity is not their aim, as it achieves transparency. However, those same people argued things that were quite simple to understand, but that showed their claims were wrong. It was their aim to confuse, to promote the idea that they knew more than others. I used to give people the benefit of the doubt in the past, to such a ridiculous extent that I got burned again and again. But now I know a little better.

However, I still like to give people the benefit of the doubt, which is why I asked you the questions, rather than assuming you were just parroting off what you had been taught.


Your statement implied that Maxwell achieved very little.
You are again inventing bullsh!t just to try to contradict me. All you had to do was read what I wrote in the quote you included from my post and you'll notice that I said ``For that very astute observation, Maxwell gets the credit.''
I read what you wrote. That's why I quoted it. Had you written that without making out that it was obvious from the law of conservation of charge, or had you said that the law of conservation of charge was a hypothesis that Maxwell proved, or anything that implied his observation wasn't obvious, then I would have said that your statement was that Maxwell made an astounding conclusion. But in light of your comments about how it was already a result of the laws known before Maxwell, and that it was a result of the conservation of charge anyway, it reads as if you were being sarcastic about Maxwell. If you wish to be clear, then do so. But don't be unclear and then expect everyone to read your mind.

What you want to think that implied is based on your personal agenda, not on anything I wrote.
As I wrote, I simply read your post, and deduced my conclusions. I'm more than sure that you have the greatest respect for many Physicists. So I was a little dismayed that you wrote this, as it did contradict my earlier belief that you at least respected the greats in Physics. You went against my cognitive bias.

But I understand your need to claim that I have a personal agenda. It's a common tactic for those who I have known, who tried to dazzle with jargon that meant little.

Actually, your entire behaviour is consistent with a personality type I'm very familiar with. It's rather disconcerting, because those people are known for arguing as if they know everything, but that behind their back, the people who know them say they are idiots.


Historical fact says it was done, but not that it was easy.
You said it was easy, not me. Here's what you said:
Simple logic: If it was so easily solved, then surely Ampere, and every other physicist before Maxwell, would have already seen that, and added it.
My logic stands. If it was so easily observed from the law of conservation of charge, then surely Ampere, and every other physicist before Maxwell, would have already seen that, and added it.

It was you who wrote:
However, it was conservation of electric charge that motivated the addition of that term.
Did you mean that conservation of charge is a phenomenon that was never assumed until recently, or did you mean that the conservation of charge was taken for granted, and so Maxwell's motivations existed equally in everyone? If everyone had the same motivation, to add the same term, then why didn't everyone add it? It suggested to me that you are trying to downplay Maxwell's achievement.

Don't try to twist what I say and then try to pin your twisted interpretation on me, because I'm going to quote you every time you backpedal and contradict yourself.
Go ahead. I'm not being challenged yet. Frankly, I'm rather bored with this, and I like watching TV, so being that banal is really quite an achievement.


I guess you could say that if you were being pernickety,
You brought it up (for no reason other than to try to find something to digress and argue with) and you capitalized ``ALL'' in ``ALL computers.'' Stop including extraneous bs for the sake of posturing and you won't keep putting your foot in your mouth.
If you want to point out every possible configuration in my statements that might not be exact, fair enough. But people have played that game with me before, since my early teens. It's a game that I've been forced to play, and I have much experience with it.

For instance, you made several spelling mistakes. If I wanted, I could have picked you up on them. I didn't even need to think to do that. But I chose not to, as that would be beneath a man of reason, and a man of science.

However, if you really keep playing this game, I may be forced to psychoanalyse your behaviour. I've been forced into such games before, and the result of such psychoanalysis was that a poster you know very well and who you would probably agree with in many ways, decided to leave the site. I only knew that his leaving was the result of my words, because his final message was to state just that.

It's not the only time that others have left such messages and left the site. It's the 3rd or 4th time this has happened. So if we continue in such a fashion, there is a good chance that you will too.

I did not want them to leave the site. I do not want to see you leave this site either. But I wish to let you know what could happen, based on prior experience and observation.

I also wish to let you know that if I was out of my depth, and clearly wrong, then you wouldn't care to take up such games over my being wrong. It is likely that you have chosen to be so careful to find the slightest fault in my statements because it is your best defence. However, since your best defence would be to point out that I was wrong, or at least very far from the mark, it is likely that I hit a nerve, and got close to your inner truth, and that my statements were on target enough that you felt you could not defend against them without denying part of your self-image, and that is something that one's subconscious will never let one do.

As a result, without trying, I hit very close to your inner self, and saw flaws in it, flaws that you felt were so large that you could not bear to address them directly, and were forced to try to find even the slightest fault with my words, to distract me from them. But in truth, I doubt you care what I think. So I do not believe that you were trying to distract me from these uncomfortable notions, but from yourself.


You can't program DNA using the description of DNA containing cognitive programs. It wouldn't work. But obviously, DNA results in a brain that is capable of thought, that can be programmed, such as arithmetic, and algebra. So obviously, there is a connection. I just said that it couldn't be what one would call a "cognitive program".
Kardinal Offishall gave you an example of just such a cognitive program:
By your quaint reasoning, complex life-forms should be impossible to evolve. Yet (magically) here we are. Gazelles were programmed by evolution to avoid Cheetahs. They’re still around, right?
There's an example of the cognitive program based on genetics you claim doesn't exist.
I again stipulate, as I did before, that I always maintained the mind has a way to develop, as that is obvious, or it wouldn't exist. However, the hypothesis that Kardinal Offishall used to explain such behaviour is wanting, and an alternative hypothesis could exist that does explain such behaviour clearly.

I do not disagree with the evidence, or ever did. I simply disagree with the hypothesis given to explain it.


That the gamete or the host mother has the ability to distinguish DNA with syntactically incorrect sentences,
The syntactically incorrect DNA is self evident in the fact that the cells in the embryo fail to keep dividing to the point of producing a living organism, just like syntactically incorrect computer programs fail to compile into a program that executes.
That's totally different. One is replication. The other is converting source code into machine code, or P-code, or bytecode, or another type of executable code that is generated by a compiler. However, if you were to give an example of a program whose initial instance cannot replicate itself due to incorrect syntax, then we'd have a comparable example.
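
As it happens, a self-replicating program is easy enough to exhibit. Here is a minimal C quine, a program whose output is exactly its own source code; the comparable example asked for would be a one-character corruption of it, which as a rule either fails to compile or fails to reproduce itself:

#include <stdio.h>
/* A quine: a program whose output is its own source code. */
int main(void) {
    const char *s = "#include <stdio.h>%c/* A quine: a program whose output is its own source code. */%cint main(void) {%c    const char *s = %c%s%c;%c    printf(s, 10, 10, 10, 34, s, 34, 10, 10, 10, 10);%c    return 0;%c}%c";
    printf(s, 10, 10, 10, 34, s, 34, 10, 10, 10, 10);
    return 0;
}

The 10s and 34s are the ASCII codes for newline and double-quote, which is what lets the string describe itself without escaping.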

Either you really are stumped by the possibility of expanding your concept of a program to be completely general in the way that most scientists do (especially those who know something about information theory), or you're being contrary for the sake of being contrary.
One of the difficulties of writing programs in the real world is that you cannot afford to have them fail and then call technical support, except for certain errors that non-academic practical users would not see as a problem. Otherwise, you stop getting paid for your work. So when programming for the real world, one has to consider every possibility, including everything that those who have studied information theory have stated is quite impossible to ever happen. You have to consider the possible, the impossible, what you've never thought of, and even what no-one has ever thought of. You cannot afford to exclude anything, however remote the chance of it occurring.

I've attempted to give you the benefit of the doubt by assuming you're being contrary for the sake of being contrary rather than assume the less flattering alternative.
And yet, you seem to me to have never considered that I might have actually considered the matter at length, have already written a post that addressed each point of Kardinal Offishall's, and yet still chose to buy Steven Pinker's book, and read it, just to give the benefit of the doubt to Kardinal Offishall, even when there should be no need.


So if the detection system worked only if the majority of genes didn't work, then the foetus would be dead anyway.
Well, that was exactly the point, so I'm not sure why you spent so many paragraphs arguing against it.
Because a dead foetus does not guarantee that the DNA is in any way corrupted, or even indicate that corruption is the most likely cause of death.


So the detection system would detect if the foetus was alive, not if the genes were syntactically correct or not.
That isn't even relevant, but since you seem to think it is, the answer to your objection is this: why is detecting a dead embryo not equivalent to detecting syntactically correct DNA?
It is the difference between detecting that a computer's power system has blown and the computer cannot start, and detecting that the computer started but the BIOS is corrupt.

One of the things you learn about in supporting a computer system, is that if a computer is broken, and it won't start, then odds on, the thing isn't plugged in, or the fuse blew, or maybe even the power supply blew, if there was a short in the electrical supply. But odds on, when that happens, once you switch the computer on, or replace the fuse, or replace the power supply, and re-start the computer, the code is fine, in the BIOS, the hard drive, and all the other devices.

So you never, ever, treat the death of a computing machine, inorganic or otherwise, to mean the code is corrupted, never, ever, ever, not until you've got the machine alive again, and you can see clearly that the code is coming up with errors that can only be explained by a corruption of the code.

Organic cells operate only under a very specific range of conditions, which include a very small range of temperature, pH, water-salt balance, and a host of other very subtle chemical balances. Any one of those changing outside of operating conditions could have killed the cell. The cell could have the equivalent of a dud power supply, in that the DNA could be fine, but some component, like the mitochondria, could have copied incorrectly. That dud mitochondria could have a fault that allowed it to operate for a very short while, but then die. Any one of a number of things could kill the cell, even though the DNA itself was never corrupted.

So in a substantial number of cases, the cell may be dead and the DNA may be syntactically correct.


But then, it's not looking for syntactically correct sentences at all, and clearly, it lets plenty through, as is the case with many genetic diseases.
The obvious software-world analogy to genetic abnormalities is programs which are syntactically correct and therefore compile and run, but which contain logical errors that result in the program behaving differently than expected.
A compiler doesn't have to check every line of code for syntactic correctness; compilers do today, but interpreters never used to, and some still don't. They have the ability to process lines of code that do work, then come across code that doesn't and throw an error, and yet, if the error is handled properly, the code can continue.

DNA is existing code. For it to compile, it would need to be converted into another form entirely. But it remains as DNA. So for all intents and purposes, if the DNA is source code, then it stays as source code, and any process that uses the DNA, such as building a specific protein based on a gene sequence, would serve as an interpreter, not a compiler.

If the code is replicated, then it is copied, and it can be copied identically, without compilation.

If I understand you right, then you seem to me to be suggesting that DNA provides a way to go through and check the sequence of each base pair, to see that it is in line with the other base pairs in its immediate sequence, and is in line with the whole gene, for every gene in the sequence. For all I know, there might be such a process. But so far, I have not seen anyone say this chemical checking mechanism exists, not in mitosis, or any process, other than the standard use of DNA in the cell, which only uses a small part of the DNA at any one time, base pair by base pair, much like an interpreter.

But if you do find clear evidence of such a biological process, that clearly achieves such an objective, and so is akin to compile-time checking of the code, by all means, please, post the link here, for me to look up and read for myself.

But until that happens, so far, I have not come across any evidence that such a specific task exists in the body, or even that the body works in a way that such a specific task has to exist. Rather, the information I have come across, suggests to me that the reverse is true.

Just because a program runs doesn't mean it doesn't contain bugs. It just means the bugs are due to errors which are, nevertheless, syntactically correct statements.
A "bug" is when the code works, but not as intended. It's called a "bug", because it bugs you, because there doesn't seem to be anything wrong with the code. An equivalent in a cell is when a gene codes correctly for a protein, but codes for a different protein than you intended for, or doesn't produce the protein when requested, or produces the protein when not requested. A genetic bug can cause cancer, or an allergic reaction, or an aneurysm.

However, if the gene has a sequence that stops halfway through due to a corruption in the sequence of base pairs, such as a codon in the wrong place that doesn't make sense in the sequence, then the program has a syntactic error. However, because the program of the DNA is broken into separate genes, a gene can have a syntactic error and the rest of the DNA can work fine, so long as that gene is not vital for cell division or the growth of the foetus. A foetus can even grow to term, and still be fine, if the gene is only responsible for producing a protein involved in producing sweat for the sweat glands. It just means the person cannot produce sweat. If the gene codes for a protein that stops growth at a certain point, then the body can grow. It just won't stop growing, like in acromegaly. There are a variety of genetic diseases. AFAIK, several of them don't produce a certain protein, and it is entirely possible that this is because the process of building the protein only gets half-way and then stops due to a corruption in the gene sequence.

So it is entirely possible for corruptions in the gene sequences to occur, and yet still, the foetus can grow to become a baby, and even live for a number of years.
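
To make the interpreter picture concrete, here is a toy sketch in C; the gene strings, the ACGT validity check, and using TAA as the stop codon are drastic simplifications invented purely for illustration:

#include <stdio.h>
#include <string.h>

/* Toy gene-by-gene "interpreter": each gene is read codon by codon,
   and a malformed codon aborts only that gene. The remaining genes
   still translate, which is the point: one syntactically broken gene
   need not invalidate the whole genome. */
static int valid_base(char b) {
    return b == 'A' || b == 'C' || b == 'G' || b == 'T';
}

static void translate(const char *name, const char *gene) {
    size_t n = strlen(gene);
    for (size_t i = 0; i + 3 <= n; i += 3) {
        if (!valid_base(gene[i]) || !valid_base(gene[i + 1]) || !valid_base(gene[i + 2])) {
            printf("%s: bad codon at position %zu - protein truncated\n", name, i);
            return; /* abort this gene only; translation of the others continues */
        }
        if (strncmp(gene + i, "TAA", 3) == 0) {
            printf("%s: stop codon reached - protein complete\n", name);
            return;
        }
        /* (a real ribosome would append an amino acid here) */
    }
    printf("%s: ran off the end - no stop codon\n", name);
}

int main(void) {
    translate("gene1", "ATGGCCTAA");    /* translates cleanly */
    translate("gene2", "ATGGXCTAA");    /* corrupt codon: truncated product */
    translate("gene3", "ATGCCGGGATAA"); /* unaffected by gene2's corruption */
    return 0;
}

gene2's corruption stops gene2's product and nothing else; there is no whole-genome "compile" step that rejects all three at once.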

The only way I can think of to prove this is not possible is either to find a specific mechanism that clearly tests every gene in the foetal DNA for correct syntax, or to show that every genetic abnormality that does not miscarry is a case where the genes work to produce a finished result, just not to do the job that was intended, in the way that was intended, and never a case where the genes either cannot start working, or can only work part of the way and then stop before the correctly-placed stop codon is reached.

I have not heard or read that either conclusion has been discovered to be true.


I prefer to consider things using logic.
Then what is keeping you from demonstrating that preference in your posts?
Again, you remind me of those who displayed the psychological profile you seem to me to be displaying. They too would say that I wasn't being logical. I then had to fully explain my logic, as if talking to a 1-year-old. It would take several hours to explain what seemed to me, and to others, to be a pretty simple conclusion. I had to examine each of their points of disbelief, and realise, for each, that it either came from a set of axioms that contradicted the ones I had stated at the beginning, or was self-contradictory and could never happen. Then I had to explain to the person why the point wasn't relevant to my conclusion, or wasn't possible. I had to do that with each of their objections. It was very tiring. But in the end, they'd usually say I was right.

However, after doing that for 30 years, I tend to avoid it now. People who regularly have those objections either decide to trust me implicitly after several times, even when they can see objections, or the effort required to explain what needs to be done becomes more lengthy, and thus costly, than the total time and cost of the entire task from beginning to end. If it continues for several times, I realise that I am better off simply not engaging with that person on that subject, and, if at work, that it's going to be far more profitable working with someone else, even if that person is a moron who needs the tiniest detail explained. At least then, I don't have to explain for several hours for every 5 minutes of thought.

I am beginning to tire of this. I will see. But I might choose to ignore your posts in the future, if this is the length of dialogue that is required to grasp a single simple point. I might not. But it's starting to look like I have to labour the point, and there is a point at which even talking to idiots is more productive than such dialogue.
 abelian
Joined: 1/12/2008
Msg: 161
IQ is a garbage tool for determining intelligence
Posted: 7/27/2010 4:24:00 PM

Did you mean that conservation of charge is a phenomenon that was never assumed until recently, or did you mean that the conservation of charge was taken for granted, and so Maxwell's motivations existed equally in everyone? If everyone had the same motivation, to add the same term, then why didn't everyone add it? It suggested to me that you are trying to downplay Maxwell's achievement.

For someone who claims to know about this, you don't seem to know much at all. Why don't you just go look up Maxwell's equations and figure out exactly what you're talking about before pontificating about it.

They too would say that I wasn't being logical. I then had to fully explain my logic, as if talking to a 1-year-old.

Spend less time telling us how brilliant you think you are and put a little effort into an attempt to make that self-evident in at least one post that doesn't ramble off topic and back to your favorite subject - your own opinion of how brilliant you are. If you have self-esteem issues, see a psychologist, since I'm not being paid to indulge you.

<plonk>
 desertrhino
Joined: 11/30/2007
Msg: 162
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/27/2010 6:58:35 PM
Just for the record: Abelian generally (and in this case) makes sense, and Scorpio generally (and particularly in this case) doesn't make nearly as much sense. I think it has to do with Scorpio's self-proclaimed non-standard world-view and information processing... He gets off on these incredible tangents that mean nothing to anyone but Scorpio, and he CANNOT let go of them or admit they might be skew or outright flawed. That's pretty clearly what's happening here.
 60to70
Joined: 7/28/2008
Msg: 163
IQ is a garbage tool for determining intelligence
Posted: 7/27/2010 11:57:15 PM
What do all these equations and diatribes and distillations of current and past knowledge have to do with sex and the birth of a child? Oh wait... this kid is fifty percent mother, fifty percent father, and the rest is just about the mix. Yawn. Go have a baby and be quiet. Not really... you all remind me of somebody out watching the stars and approximating the distances between them. It's good to know... but NOT a basis to build your life on. Most people do know how to honour the specialness of experience without the details. Look... then... wonder... then... calculate... then... wonder. A good recipe. lol. lol.
 abelian
Joined: 1/12/2008
Msg: 164
IQ is a garbage tool for determining intelligence
Posted: 7/28/2010 12:18:44 AM
What do all these equations and diatribes and distillations of current and past knowledge have to do with sex and the birth of a child?

Quite a lot, actually. If it weren't for quantum mechanics and some intellectual curiosity about the magnetic moments of protons and electrons, there would be no MRI machines. If it weren't for the equations describing the classical Doppler shift, there would be no diagnostic use of ultrasound. The lack of those things would mean poorer prenatal care for women having babies and would eliminate the ability of physicians to diagnose and correct a problem before a baby is born. Those are only two examples of what that has to do with sex and babies. How many more would you like?

Not really... you all remind me of somebody out watching the stars and approximating the distances between them. It's good to know... but NOT a basis to build your life on.

If no one did those things, you wouldn't be able to park your butt on the couch and wax inane through your computer to people on the internet. I really feel sorry for people like you who are unable to appreciate nature in any but the most superficial way.
 nipoleon
Joined: 12/27/2005
Msg: 165
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/29/2010 11:10:06 AM
When I was a kid, it was inadvisable to tell children what their IQ was.
My father did once reveal to me that my IQ was well above 100.

But, I don't know if he meant....... well above 100 ?
Or if he meant, well....... above 100 .
 scorpiomover
Joined: 4/19/2007
Msg: 166
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/30/2010 7:24:02 AM
RE Msg: 219 by abelian:

Did you mean that conservation of charge is a phenomenon that was never assumed until recently, or did you mean that the conservation of charge was taken for granted, and so Maxwell's motivations existed equally in everyone? If everyone had the same motivation, to add the same term, then why didn't everyone add it? It suggested to me that you are trying to downplay Maxwell's achievement.
For someone who claims to know about this, you don't seem to know much at all. Why don't you just go look up Maxwell's equations and figure out exactly what you're talking about before pontificating about it.
I did. I watched Jim Al-Khalili and Stephen Hawking, theoretical physicists who are very famous in the UK, rave about Maxwell. I didn't quite understand why. But at your urging, I looked up Ampere's Law with Maxwell's correction, and put a bit more effort into attempting to understand what Maxwell was adding. However, the more I think about what I've read, the more it blows me away.

Ampere's law is quite straightforward: magnetic field strength is dependent on the strength of the electric current. You were right to point out that Ampere's Law without Maxwell's correction shows an inconsistency with the conservation of charge. Ampere's Law only really looks at situations where the charge density is constant.

Maxwell's correction is described as explaining a displacement current, that calculates the extra effect of a change in the electric field on the magnetic field strength.
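
In modern notation, the two versions side by side make the problem visible:

\nabla \times \vec{B} = \mu_0 \vec{J} \qquad \text{(Ampere)}

\nabla \times \vec{B} = \mu_0 \vec{J} + \mu_0 \varepsilon_0 \frac{\partial \vec{E}}{\partial t} \qquad \text{(Ampere-Maxwell)}

Taking the divergence of the first forces \nabla \cdot \vec{J} = 0, which contradicts the continuity equation \partial \rho / \partial t + \nabla \cdot \vec{J} = 0 whenever the charge density changes; with the added term and Gauss's law \nabla \cdot \vec{E} = \rho / \varepsilon_0, the divergence reproduces the continuity equation exactly.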

So naturally, I have a few problems, things I need to think about:

1) Ampere's law makes sense. Why would it show a violation of conservation of charge? There is obviously a disconnect between the way electro-magnetism is popularly portrayed and the way it really works, such that the popular portrayal would violate the basic rule of conservation of charge.

2) Maxwell's correction is very specific, mathematically, in the formula. But the way a formula is written speaks directly as to how the laws of physics work.

I've realised that Maxwell's correction, though short and to the point, had to reflect the laws of physics, and so each symbol used, tells me reams about the fundamental nature of the behaviour of electro-magnetism.

It's also started me thinking in entirely different ways about gravity, ways that might explain it in sub-atomic situations, and that are even easier to understand than what I was taught about the normal world.

It looks to me as if it could reveal as much to me about physics as Cantor's Theorem has about mathematics, and that has completely revolutionised my view of mathematics, maybe even more than Einstein's theory of relativity has totally changed the way I see the world.

I don't want to go into specifics here, because frankly, my mind is exploding with possibilities, and I don't want to go off half-cocked, particularly with you. I think I really do need to read Maxwell's paper, in the original text, and take my time over it, to really understand it well. So if you want to say more about the displacement effect, how it works in detail, and what it teaches us about the fundamentals of electro-magnetism, then by all means. But I'd rather wait until I've read the original paper.

But if past experience is true to form, Maxwell's correction could teach me 1000 times what Jim Al-Khalili and Stephen Hawking claimed.

I have to thank you most sincerely.

Spend less time telling us how brilliant you think you are and put a little effort into an attempt to make that self-evident in at least one post that doesn't ramble off topic
I try. But I have so many thoughts, and it takes me a huge amount of time to condense my thoughts down into a very short and concise point that is still long enough for people to understand easily. I usually end up making things way too long, or way too short for others to understand.

I have to be realistic, though. If I aimed to wait until I'm as good at being concise and clear as you want, then I'd be 70 before I posted a single post. Perfection is a road, not a destination. So I write my posts, re-write them a few times, and then post, knowing that they still could be improved.

If you have self-esteem issues, see a psychologist,
I already chose to work on that aspect of myself. I'm not relying on you for that. You're not being asked to indulge me at all.

since I'm not being paid to indulge you.
I can do that. But smart people hate it. If I take my time to condense my thoughts into a very clear, concise point, that takes me a lot of time. During that time, I'm usually coming up with tons of ways of looking at the same problem. So the more I make my posts easy for you to read, the longer I think about it, and the better my analysis of the issue. Usually, if I take enough time to condense it properly, I've considered so many angles that my single clear and concise point takes into account everything that smart people have contributed that is true, and shows abundantly clearly the points they made that are not true, to such an extent that everything they've said looks either obviously idiotic or almost totally superfluous, and that drives smart people crazy, because they feel they might as well not exist.

So I'm not going to just wait until I've got my points concise and clear enough that they are easy to read, or you'll be positively livid.

Again, I have to thank you for your posts, for you have shown me that I really have something really worth sinking my teeth into, with Maxwell's correction.
 scorpiomover
Joined: 4/19/2007
Msg: 167
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/30/2010 7:24:52 AM
RE Msg: 220 by desertrhino:
Just for the record: Abelian generally (and in this case) makes sense, and Scorpio generally (and particularly in this case) doesn't make nearly as much sense.
Can I just add, "in your opinion"? Not everyone agrees with everything you believe, you know.

I think it has to do with Scorpio's self-proclaimed non-standard world-view and information processing...
I agree. I do think of things from angles few have even thought about in the present time.

He gets off on these incredible tangents that mean nothing to anyone but Scorpio,
I agree that I'm not as clear as I could be. I can explain them more clearly. But that usually involves either condensing them to a single point, which no-one but me understands, or expanding them to several pages, which is more than most are prepared to read. Getting the right balance between the two seems to take me an incredible amount of time.

and he CANNOT let go of them or admit they might be skew or outright flawed. That's pretty clearly what's happening here.
I just wrote that I'm going to read Steven Pinker's book before I post on the topic, to see if I am wrong.
 nipoleon
Joined: 12/27/2005
Msg: 168
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/30/2010 12:23:38 PM

IQ is a garbage tool for determining intelligence

Not intelligence but accomplishment.

Von Goethe was undoubtedly more intelligent than Newton, but Newton is better remembered for his accomplishments.
Newton did more with what he had.
 sosdd
Joined: 12/14/2009
Msg: 169
IQ is a garbage tool for determining intelligence
Posted: 7/30/2010 2:40:33 PM
Can't say as I ever met anyone that scored high on the IQ test that wasn't intelligent. I have met people pissed off that they didn't score high and so they said the test was a bunch of crap. They felt they were so much more intelligent and superior, so they knocked the test instead of comprehending that it is just a tool to measure where you are.
 nipoleon
Joined: 12/27/2005
Msg: 170
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/30/2010 6:00:40 PM

Oh yes. Because calculus is less complicated than painting

Perhaps you misunderstood.
I was talking about the great German poet, philosopher, and scientist, Johann von Goethe... not the Dutch painter Vincent van Gogh.
 aremeself
Joined: 12/31/2008
Msg: 171
view profile
History
IQ is a garbage tool for determining intelligence
Posted: 7/30/2010 10:06:58 PM
a person with high IQ improperly channelled would suck, as opposed to an average properly channelled individual.

proper to me does not mean a crowd follower