The timing of this submission seems to coincide with (and is possibly a reaction to) the other story on the HN front page: Working quickly is more important than it seems (2015) (jsomers.net)
To clarify, some are misunderstanding James Somers as advocating sloppy, low-quality work, as if he's recommending speed>quality. He's saying something else: remove latencies and delays to shorten feedback loops. Faster feedback cycles lead to more repetitions, which leads to higher quality.
"slowness being a virtue" is not the opposite of Somer's recommendation about "working quickly".
Correct, it is the speed of iteration that is important. [0]
If AI can run the OODA loop faster without getting fatigued, then even if its output is worse quality, like the F-86, it will win 10 out of 10 times.
EDIT:
> Boyd knew both planes very well. He knew the MiG-15 was a better aircraft than the F-86. The MiG-15 could climb faster than the F-86. The MiG-15 could turn faster than the F-86. The MiG-15 had better distance visibility.
> The F-86 had two points in its favor. First, it had better side visibility. While the MiG-15 pilot could see further in front, the F-86 pilot could see slightly more on the sides. Second, the F-86 had a hydraulic flight control. The MiG-15 had a manual flight control.
> Boyd decided that the primary determinant to winning dogfights was not observing, orienting, planning, or acting better. The primary determinant to winning dogfights was observing, orienting, planning, and acting faster.
> Without hydraulics, it took slightly more physical energy to move the MiG-15 flight stick than it did the F-86 flight stick. Even though the MiG-15 would turn faster (or climb higher) once the stick was moved, the amount of energy it took to move the stick was greater for the MiG-15 pilot.
> With each iteration, the MiG-15 pilot grew a little more fatigued than the F-86 pilot. And as he got more fatigued, it took just a little bit longer to complete his OOPA loop. The MiG-15 pilot didn’t lose because he got outfought. He lost because he got out-OOPAed.
Totally agree. The way I see it, it's related to taking time to sharpen your axe.
Having a defined flow that gives you quick feedback and doesn't get in the way.
If you are writing, you'd be using an app where you can quickly do what you want, e.g. shortcuts for bold, vim/emacs motions. That "things-not-getting-in-the-way" state is what leads to a flow state, in my opinion.
Muscle memory makes action free, so you can focus on thinking deeper.
The same happens with coding, although it is more complex and can take time to land on a workflow with tools that let you move quickly. I'm talking about logs, a debugger (if needed), hot reloading of the website, unit tests that run fast, knowing who to ask or where to go to find references, good documentation, a good database client, having shortcuts prepared for everything... and so on.
I think it would be cool if people shared their flow-tools for different tech stacks; it could benefit a lot of us who have some % of this done but aren't 100% there yet.
To add: introduce some "slowness" before starting work - fix the latencies and delays, and plan what you're going to make instead of figuring it out as you go.
I think there are two separate things. Slowness of progress in research is good because it signals high value/difficulty; with this I wholeheartedly agree. The other is that slowness in solving a given problem is good, which is less clear.
I think intelligence should indubitably be linked to speed. If you can do everything faster, I think smarter is a correct label. What I also think is true is that slowness can be a virtue in solving problems, both for a person and as a strategy. But this is usually because fast strategies rely on priors/assumptions and ideas which generalize poorly; and often more general and asymptotically faster algorithms are slower when tested on a limited set or at a difficulty level which is too low.
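That last point - that an asymptotically faster algorithm can lose at small sizes - is easy to sketch with hypothetical cost models. The functions and constants below are invented purely for illustration, not measurements of any real algorithms:

```python
import math

# Toy cost models: a "simple" O(n^2) method with a tiny constant factor
# versus a "clever" O(n log n) method with heavy per-step overhead.
def cost_simple(n):
    return n * n

def cost_clever(n):
    return 50 * n * math.log2(max(n, 2))

# The clever algorithm only pays off once n is large enough.
for n in (8, 64, 512, 4096):
    winner = "simple" if cost_simple(n) < cost_clever(n) else "clever"
    print(f"n={n}: {winner} wins")
```

With these made-up constants the crossover sits in the hundreds, so any benchmark that never goes past small n makes the "slower" algorithm look strictly better.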
I haven’t looked into the source study, so who knows if it’s good, but I recall this article about smart people taking longer to answer hard questions because they take more into consideration, while being much more likely to be correct.
Great article. I like the simple point about the hypothetical IQ test sent one week in advance. It makes a strong case for time being the true bottleneck. I think this same idea could be applied to most tests.
Implicit in the design of most tests is the idea that a person's ability to quickly solve moderately difficult problems implies a proportional ability to solve very difficult problems if given more time. This is clearly jumping to a conclusion. I doubt there is any credible evidence to support this. My experience tends to suggest the opposite; that more intelligent people need more time to think because their brains have to synthesize more different facts and sources of information. They're doing more work.
We can see it with AI agents as well; they perform better when you give them more time and when they consider the problem from more angles.
It's interesting that we have such bias in our education system because most people would agree that being able to solve new difficult problems is a much more economically valuable skill than being able to quickly solve moderate problems that have already been solved. There is much less economic and social value in solving problems that have already been solved... Yet this is what most tests select for.
It reminds me of the "factory model of schooling." Also there is a George Carlin quote which comes to mind:
"Governments don't want a population capable of critical thinking, they want obedient workers, people just smart enough to run the machines and just dumb enough to passively accept their situation."
I suspect there may be some correlation between high IQ, fast thinking, fast learning, and suggestibility (meaning insufficient scrutiny of learned information). What if fast learning comes at the expense of scrutiny? What if fast thinking is tested for as a proxy for fast learning?
What if the tests which our society and economy depend on ultimately select for suggestibility, not intelligence?
>most people would agree that being able to solve new difficult problems is a much more economically valuable skill than being able to quickly solve moderate problems that have already been solved
Do most people agree with that? I agree with that completely, and I have spent a lot of time wishing that most people agreed with that. But my experience is that almost no one agrees with that...ever...in any circumstance.
I don't even think society as a whole agrees with this statement. If you just rank careers according to the ones that have the highest likelihood of making the most money, the most economically valuable tend to be the ones solving medium difficulty problems quickly.
"Implicit in the design of most tests is the idea that a person's ability to quickly solve moderately difficult problems implies a proportional ability to solve very difficult problems if given more time."
I used to share that doubt, especially during my first semesters at university.
However, my experience over the decades has been that people who solved moderately difficult problems quickly were also the ones who excelled at solving hard and novel problems. So in my (limited) experience, there is a justification for it, and I'd definitely be interested (and not surprised) to see credible evidence for it.
> I like the simple point about the hypothetical IQ test sent one week in advance.
It’s a simple point but an incorrect one.
If you can work on it for a week, it’s no longer an IQ test. Nobody is saying that the questions on an IQ test are impossible. It’s the fact that there are constraints (time) and that everybody takes the test the same way that makes it an IQ test. Otherwise it’s just a little sheet of kinda tricky puzzles.
Would you be a better basketball player if everyone else had to heave from 3/4 court but you could shoot layups? No, you’d be playing by different rules in an essentially different game. You might have more impressive stats but you wouldn’t be better.
> Would you be a better basketball player if everyone else had to heave from 3/4 court but you could shoot layups? No, you’d be playing by different rules in an essentially different game. You might have more impressive stats but you wouldn’t be better.
I think the correct analogy here is that if everyone had to shoot from 3/4 court, you would likely end up with a different set of superstars than the set of superstars you get when dunking is allowed.
In other words, if the IQ test were much much harder, but you had a month to do it, you might find that the set of people who do well is different than who does well on the 1 hour test. Those people may be better suited to pursuing really hard open ended long term problems.
No, I don’t think that is the correct analogy. The analogy in the blog post is that you (one person) get a month's head start on the test. You would look like a genius because you’d outscore everyone else who had the time constraint.
Yes, if you play a different game you’ll find different high performers. That is obvious. But it is not what the blog post is saying. It is saying if you let one person play the same game but by different rules, they will look better.
> Consider this: if you get access to an IQ test weeks in advance, you could slowly work through all the problems and memorize the solutions. The test would then score you as a genius. This reveals what IQ tests actually measure. It’s not whether you can solve problems, but how fast you solve them.
You retort that "if you can work on it for a week, then it's no longer an IQ test", but that retort is one the author would agree with. The author is simply arguing that what IQ tests measure is not necessarily the same kind of intelligence as what is necessary for success in the real world. He's not actually arguing that people should be allowed to take as long as they want on the test; he's simply using that hypothetical to illustrate "what IQ tests actually measure".
Yeah, I should clarify - I also don't think the article made the correct analogy. I meant that the different-game-gets-different-winners analogy is how the article should have tried to make the point the author ultimately intended.
Counterpoint to consider: In real life, you can just play a different game. Most people will choose to shoot from 3/4 court instead of running all the way to the other end, because they’re not interested in basketball.
Most people aren’t interested enough to work 100+ hours per week. But we wouldn’t say Elon isn’t better at work ”because he doesn’t even work a 40-hour work week”
It has a lot to do with interest. Michael Jordan isn’t a world class mathematician. Elon isn’t a world class father.
I like this post. It reminds me of contemporary programming language research. There are precious few people doing interesting stuff in PL research these days who are actually trying to uncover new paradigms, beyond the same old "we discovered how to do X in Y type system" or "novel technique to generate objects with blah blah constraints".
People doing actually interesting stuff can't get funding, so they have to lone-wolf their entire research or just give up and work on stuff that gets paid - people like:
- Jonathan Edwards
- Allen Webster
- Bret Victor
All with seriously intriguing ideas that probably have potential, but nobody seems to want to actually dig into the stuff. Fortunately, there are guys like Stephen Kell who are kind of doing it even in academia, but I think he too is limited to working on the boring problems that get funding.
"Development is the execution of a map toward a goal while research is the pursuit of a goal without a map".
If there is something I can take from this post, it will be this quote.
The quip about IQ tests might be true for common range IQ tests, but IQ tests that test for very high IQ like the Ultra test [0] are untimed and unsupervised.
If I weren't in IT, I think I'd love the military: not the stupid political stuff and killing people, but the organization, discipline, routine, focus on predictability, protocols, etc.
Yeah, it's boring if it all works, but boring is good. And we've been trying to apply this to software development for ages as well - think "continuous deployment" practices (or their new name, DORA metrics, in the 2020s).
I wish software development weren't like the Wild West where everyone can do as they please... Well-defined, proven standards would be so cool to have, but no, we have like 20 different ways to do auth and none of them are secure, and regular switches from favoring SSR to client-side rendering and back to SSR again. Just to name some examples.
Hmm, I've been doing webdev for a living since 1998, I'm intimately familiar with the complete history and modern practices, and I respectfully disagree. Pretty sure it's a good thing we're not all forced to do things the same way. And the new SSR with CSR capabilities is not at all the same as the old SSR. You're right that auth is kind of a hot mess, though.
> And the new SSR with CSR capabilities is not at all the same as the old SSR
Yeah, I know, it's just a display of the industry's indecisiveness. Every time we need something new and fresh, some old favorite is revived until after 5 years it's old again. I like being able to do things differently; I hate having to implement "security" features knowing all too well that they aren't secure at all. Minimizing attack surface should not be the default. And it's not like this is a new problem. For some reason web devs love to work around a problem instead of fixing it.
Reminds me of top Age of Empires II players making tons of clicks per minute.
The game appears so smooth when watching it as a spectator (without seeing the player's mouse and clicks, only the units moving).
This reminds me of a question I had when I played chess for a couple of years. I was a lot better (as evidenced by my Elo rating on chess.com) at long games (1 turn per day) than short games (say, half an hour total).
At the time, I read that everybody is better at "slow" chess. But does that explanation make sense? If everybody is better, shouldn't my Elo rating have stayed the same?
With more time, the scale of improvement is very personal. For some people, going from 15 minutes to 1 hour gives a massive boost, while others do not improve much. And some people can lose focus or get distracted during longer games, so for them more time may make their play worse.
I'm unfamiliar with chess.com, but correspondence chess (1 day/move) and rapid (game in 30+0) should fall under two different rating classifications. Having different ratings between them is to be expected.
And while people tend to make __better moves__ in slower time controls, their rapid/blitz ratings are usually higher than standard ratings.
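For context on why "everybody is better at slow chess" is compatible with a rating staying put: the Elo model only predicts results relative to other players in the same pool. A minimal sketch of the standard formulas (K=32 is a common but arbitrary choice):

```python
def elo_expected(r_a, r_b):
    # Expected score (0..1) of player A against player B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(rating, expected, actual, k=32):
    # A rating moves only when results deviate from what the rating predicts.
    return rating + k * (actual - expected)

# Two equally rated players: each is expected to score 0.5,
# so a win gains the winner k * 0.5 = 16 points.
e = elo_expected(1500, 1500)
new_rating = elo_update(1500, e, 1.0)
```

So if everyone plays objectively better in daily games, the expectations cancel out and nobody's rating moves; a higher slow-chess rating means you improve with extra time more than your opponents do.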
I enjoyed this. At my own workplace it's a challenge to fit my team's work into the wider sprint-based methodology where every project must be refined, estimated, and broken down into items with <2 days effort. That makes a certain amount of sense if, say, you're building a standard web portal. It makes less sense if, say, you're adapting modern hierarchical routing algorithms to take vehicle dimension restrictions into account. It's difficult to express just how nebulous this kind of work can be. Managers like to say "Maybe you don't know how long it will take now, but you can research and prototype for a couple of days and have a better idea". The problem is that research work generally takes the following form:
* Come up with 5 possible approaches (2 days)
* Create benchmark framework & suite (1 day)
* Try out approach A, but realise that it cannot work for subtle technical reasons (2 days)
* Try out approach B (2 days)
* Fail to make approach B performant enough (3 days)
...
You just keep trying directions, refining, following hunches, coming up with new things to try etc... until you (seemingly randomly) land on something that works. This is fundamentally un-estimatable. And yet if you're not doing this sort of work, you will rarely come up with truly novel feats of engineering.
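One way to see why this is un-estimatable: if each approach independently pans out with some fixed probability, the total duration follows a geometric distribution, whose long tail wrecks any point estimate. A toy simulation (the success probability and days-per-attempt are invented numbers, not data):

```python
import random

random.seed(0)  # reproducible toy run

def research_project(p_success=0.25, days_per_attempt=3):
    # Keep trying approaches until one works; each attempt costs a few days.
    attempts = 1
    while random.random() > p_success:
        attempts += 1
    return attempts * days_per_attempt

durations = sorted(research_project() for _ in range(10_000))
median = durations[len(durations) // 2]
p90 = durations[int(len(durations) * 0.9)]
print(f"median: {median} days, 90th percentile: {p90} days")
```

Even in this idealized model the 90th percentile comes out at a few times the median, so "how long will it take?" has no honest single-number answer - and real research is worse, because p_success itself is unknown.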
Almost everywhere, management - whether wearing the skin of agile on top or not - is still bound to good old-school Taylorism. And there is always this complete lack of understanding of the types of work there are, and that you cannot cram work on complex, novel things into the same mold as something standardized.
Alas, businesses don't care, because they want their estimates and roadmaps and plans, and we all pretend it works...
Bad article. The thesis may even be valuable, but it’s riddled with falsehoods trying to prove the point. It reads more like the usual person disliking the idea of IQ and trying to bash its foundations. Some actual facts:
1. Einstein was a great student (as common sense would expect) [1]: top of his class at ETHZ, and the supposedly failed exam was one he took earlier than intended. He had great, although not flawless, grades all the way through. He wasn’t a mindless robot and clearly ruffled some feathers by not showing up for classes, but his academic record is exactly what you would expect from a brilliant but somewhat nonconformist mind. He may not have been von Neumann or Terence Tao, I suppose.
2. The main “source” of the article is an even more flawed blog post [2], which again just bashes IQ with no sliver of proof that I can see, other than waving hands in the air while saying “dubious statistical transformations”, as if that wasn’t the only possible way to do these kinds of tests. Please prove me wrong and show me a proper study in there; I can’t see it, but I’m on mobile.
Disappointing. What’s the point of it? Quote actual scientists, for example Higgs, who are on record saying that modern academic culture is too short term focused. Basically everyone I’ve ever spoken to about it in academia agrees. Might be a biased sample, but I think it’s more that everyone realizes we’ve dug ourselves into a hole that’s not so easy to escape.
Good post, but I wish he had delved more into how modern institutions could be revamped to allow for slow, long term thinking.
I think there is an assumption that institutions are inherently short-term optimized, but I don’t know if that’s actually true, or merely a more recent phenomenon.
My guess is that you’d need to deliberately be “less than hyper rational” when doling out funding, because otherwise you end up following the metrics mentioned in the post. In other words, you might need to give out income randomly to everyone that meets certain criteria, rather than optimizing for the absolute best choice. The nature of inflation and increasing costs of living also becomes a problem, as whatever mechanism you’re using to fund “long term” work needs to be increasing every year.
> The Buxton Index of an entity, i.e. person or organization, is defined as the length of the period, measured in years, over which the entity makes its plans. For the little grocery shop around the corner it is about 1/2, for the true Christian it is infinity, and for most other entities it is in between: about 4 for the average politician who aims at his re-election, slightly more for most industries, but much less for the managers who have to write quarterly reports. The Buxton Index is an important concept because close co-operation between entities with very different Buxton Indices invariably fails and leads to moral complaints about the partner.
like the idea of the article. however, it gave me bad vibes. this “virtue’s” only use is to claim moral high ground over other “virtues” instead of deconstructing intelligence as a whole.
why is it bad that the person with the highest IQ does puzzle columns?
are all people with IQ supposed to be doing groundbreaking research?
can you only do groundbreaking research if you’re intelligent?
i think the real virtue here is not “slowness” but rather persistence. what do you think?
> are all people with IQ supposed to be doing groundbreaking research?
I don't know about "supposed to", but... it's a reasonable hope or expectation, right? That someone with extraordinary capabilities would want to use them for some extraordinary benefit for mankind. I appreciate vos Savant's contribution to public knowledge, but if you have the ability to make your name by progressing something extremely challenging (like the Riemann hypothesis) then wouldn't you want to try that?
Reminds me of that scene in Good Will Hunting where Sean presses Will on why he sticks to manual labouring when he's far smarter than highly trained university professors.
I agree that it's a reasonable hope but not an expectation. I don't think it's fair to put that type of pressure on someone, and I don't want to assume that's necessarily what you meant.
I don't know if you read "Flowers for Algernon" but that's what I think about when discussing highly/exceptionally intelligent people.
That's totally fair. I didn't mean "expectation" in the sense of social pressure, but rather that it's fairly likely that someone would want to use their skills in that way.
I saw another post here saying speed-work is important. It's neither slow work nor speed work. Stop making these generic blind rules. Just go by what the context needs. Keep your eyes open - not to these kinds of rules, but to what's going on around you.
All of the fast work will ideally soon be automated, leaving the fast workers with nothing to do but starve. In a righteous world, the slow workers who can change how the fast work is done will ultimately win.
[0] https://blog.codinghorror.com/boyds-law-of-iteration/
https://news.ycombinator.com/item?id=46270918
https://bigthink.com/neuropsych/intelligent-people-slower-so...
[0] https://megasociety.org/admission/ultra/
"Dress me slowly that I am in a hurry"
Walk slowly and you'll walk safe and far.
Same thing, but from the trades instead of the military.
For example if I were to give $1 to every person on earth, but $100 million to you, everyone would be richer but you would be a lot richer still.
And while people tend to make __better moves__ in slower time controls, their rapid/blitz ratings are usually higher than standard ratings.
* Come up with 5 possible approaches (2 days)
* Create benchmark framework & suite (1 day)
* Try out approach A, but realise that it cannot work for subtle technical reasons (2 days)
* Try out approach B (2 days)
* Fail to make approach B performant enough (3 days)
...
You just keep trying directions, refining, following hunches, coming up with new things to try, etc., until you (seemingly randomly) land on something that works. This is fundamentally un-estimatable. And yet if you're not doing this sort of work, you will rarely come up with truly novel feats of engineering.
1. Einstein was a great student (as common sense would expect) [1]. Top of his class at ETHZ, and the supposedly failed exam was because he tried to take it earlier than intended. He had great, although not flawless, grades all the way through. He wasn’t a mindless robot and clearly ruffled some feathers by not showing up for classes, but his academic record is exactly what you would expect from a brilliant but somewhat nonconformist mind. He may not have been Von Neumann or Terence Tao, I suppose.
2. The main “source” of the article is an even more flawed blog post [2], which again just bashes IQ with no sliver of proof that I can see, other than waving hands in the air while saying “dubious statistical transformations”, as if that weren’t the only possible way to do these kinds of tests. Please prove me wrong and show me a proper study in there; I can’t see it, but I’m on mobile.
Disappointing. What’s the point of it? Quote actual scientists, for example Higgs, who are on record saying that modern academic culture is too short term focused. Basically everyone I’ve ever spoken to about it in academia agrees. Might be a biased sample, but I think it’s more that everyone realizes we’ve dug ourselves into a hole that’s not so easy to escape.
[1]: https://m.youtube.com/watch?v=2zwZsjlJ-G4
[2]: https://www.theintrinsicperspective.com/p/your-iq-isnt-160-n...
I think there is an assumption that institutions inherently are short term optimized, but I don’t know if that’s actually true, or merely a more recent phenomenon.
My guess is that you’d need to deliberately be “less than hyper rational” when doling out funding, because otherwise you end up following the metrics mentioned in the post. In other words, you might need to give out income randomly to everyone that meets certain criteria, rather than optimizing for the absolute best choice. The nature of inflation and increasing costs of living also becomes a problem, as whatever mechanism you’re using to fund “long term” work needs to be increasing every year.
> The Buxton Index of an entity, i.e. person or organization, is defined as the length of the period, measured in years, over which the entity makes its plans. For the little grocery shop around the corner it is about 1/2, for the true Christian it is infinity, and for most other entities it is in between: about 4 for the average politician who aims at his re-election, slightly more for most industries, but much less for the managers who have to write quarterly reports. The Buxton Index is an important concept because close co-operation between entities with very different Buxton Indices invariably fails and leads to moral complaints about the partner.
why is it bad that the person with the highest IQ does puzzle columns? are all people with high IQs supposed to be doing groundbreaking research? can you only do groundbreaking research if you’re intelligent?
i think the real virtue here is not “slowness” but rather persistence. what do you think?
I don't know about "supposed to", but... it's a reasonable hope or expectation, right? That someone with extraordinary capabilities would want to use them for some extraordinary benefit for mankind. I appreciate vos Savant's contribution to public knowledge, but if you have the ability to make your name by progressing something extremely challenging (like the Riemann hypothesis) then wouldn't you want to try that?
Reminds me of that scene in Good Will Hunting where Sean presses Will on why he sticks to manual labouring when he's far smarter than highly trained university professors.
I don't know if you've read "Flowers for Algernon", but that's what I think about when discussing highly/exceptionally intelligent people.