Assessing Kurzweil predictions about 2019: the results
EDIT: Mean and standard deviation of individual predictions can be found here.
Thanks to all my brave assessors, I now have the data about Kurzweil's 1999 predictions about 2019.
This was a follow-up to a previous assessment of his predictions about 2009, which showed a mixed bag, roughly evenly divided between right and wrong, which I found pretty good for ten-year predictions:
So, did more time allow for trends to overcome noise or more ways to go wrong? Pause for a moment to calibrate your expectations.
Methods and thanks
So, for the 2019 predictions, I divided them into 105 separate statements and put out a call for volunteers, with instructions here; the main relevant point being that I wanted their assessment for 2019, not for the (possibly transient) current situation. I got 46 volunteers with valid email addresses, of whom 34 returned their assessments. So many thanks, in reverse alphabetical order, to Zvi Mowshowitz, Zhengdong Wang, Yann Riviere, Uriel Fiori, orthonormal, Nuño Sempere, Nathan Armishaw, Koen Holtman, Keller Scholl, Jaime Sevilla, Gareth McCaughan, Eli Rose and Dillon Plunkett, Daniel Kokotajlo, Anna Gardiner... and others who have chosen to remain anonymous.
The results
Enough background; what did the assessors find? Well, of the 34 assessors, 24 went the whole hog and did all 105 predictions; on average, 91 predictions were assessed by each person, a total of 3078 individual assessments[1].
So, did more time allow for more perspective or more ways to go wrong? Well, Kurzweil's predictions for 2019 were considerably worse than those for 2009, with more than half strongly wrong:
Interesting details
The (anonymised) data can be found here[2], and I encourage people to download and assess it themselves. But some interesting results stood out to me:
Predictor agreement
Taking a single prediction - for instance, the first one:
- 1: Computers are now largely invisible. They are embedded everywhere--in walls, tables, chairs, desks, clothing, jewelry, and bodies.
we can measure assessor agreement by the standard deviation of the scores they gave it.
Perfect agreement would be a standard deviation of 0; maximum disagreement (half find "1", half find "5") would be a standard deviation of 2. Perfect spread - equal numbers of 1s, 2s, 3s, 4s, and 5s - would have a standard deviation of √2 ≈ 1.4.
Across the 105 predictions, the maximum standard deviation was 1.7, the minimum was 0 (perfect agreement), and the average was 0.97. So the assessors had a moderate tendency to agree with each other.
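To make these numbers concrete, here is a minimal Python sketch (not the original analysis; the matrix layout - one row per prediction, one column per assessor - is assumed from footnote 2, and the scores below are placeholder data, not the real spreadsheet):

```python
import numpy as np

# Scores: 1 = "True" ... 5 = "False", 3 = "Cannot Decide".

# Benchmark standard deviations quoted above (population SD, ddof=0):
print(np.std([5, 5, 5, 5]))      # perfect agreement -> 0.0
print(np.std([1, 1, 5, 5]))      # maximum disagreement -> 2.0
print(np.std([1, 2, 3, 4, 5]))   # perfect spread -> sqrt(2) ~ 1.41

# Per-prediction statistics from an assessment matrix
# (placeholder random data; substitute the linked spreadsheet,
# with NaN wherever an assessor skipped a prediction).
scores = np.random.randint(1, 6, size=(105, 34)).astype(float)
means = np.nanmean(scores, axis=1)   # mean verdict per prediction
stds = np.nanstd(scores, axis=1)     # assessor disagreement per prediction
print(stds.max(), stds.min(), stds.mean())  # cf. 1.7, 0 and 0.97 in the post
```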
Most agreement/falsest predictions
There was perfect agreement on five predictions; and on all of these, the agreed prediction was always "5": "False". These predictions were:
- 51: "Phone" calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses.
- 55: [...] Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.
- 59: The all-enveloping tactile environment is now widely available and fully convincing.
- 62: [...] These technologies are popular for medical examinations, as well as sensual and sexual interactions with other human partners or simulated partners.
- 63: [...] In fact, it is often the preferred mode of interaction, even when a human partner is nearby, due to its ability to enhance both experience and safety.
Most accurate prediction:
With a mean score of 1.3, the prediction deemed most accurate was:
- 83: The existence of the human underclass continues as an issue.
The next prediction deemed most accurate (mean of 1.4), is:
- 82: People attempt to protect their privacy with near-unbreakable encryption technologies, but privacy continues to be a major political and social issue with each individual's practically every move stored in a database somewhere.
Least agreement
With a standard deviation of 1.7, the predictors disagreed the most on this prediction:
- 37: Computation in general is everywhere, so a student's not having a computer is rarely an issue.
The next prediction with the most disagreement (st dev 1.6) is:
- 16: Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices.
Most "Cannot Decide"
This prediction had more than 46% of predictors choosing "Cannot Decide":
- 20: It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections.
A question of timeline?
It's been suggested that Kurzweil's predictions for 2009 are mostly correct in 2019. If this is the case - Kurzweil gets the facts right, but the timeline wrong - it would be interesting to revisit these predictions in 2029 (if he is a decade optimistic) and 2039 (if he expected things to go twice as fast). Many of his predictions do seem to be of the type "once true, always true", though, so his score should rise with time, assuming continuing technological advance and no disasters.
In conclusion
Again, thanks to all the volunteers who assessed these predictions, and thanks to Kurzweil who, unlike most prognosticators, had the guts and the courtesy to write down his predictions and give them a date.
I strongly suspect that most people's 1999 predictions about 2019 would have been a lot worse.
- Five assessments of the 3078 returned question marks; I replaced these with "3" ("Cannot Decide"). Four assessors of the 34 left gaps in their predictions, instead of working through the randomly ordered predictions; to two significant figures, excluding these four didn't change anything, so I included them all. ↩︎
- Each column is an individual predictor, each row an individual prediction. ↩︎
Comments
My prediction after seeing the 2009 graph:
15-20% True
8-12% Weakly True
8-12% Undecided
20-25% Weakly False
35-45% False
This was basically taking the 2009 graph and skewing it to the right, pivoting on Undecided. It was still too optimistic.
True: 15-20% vs 12%
Weakly True: 8-12% vs 12%
Undecided: 8-12% vs 10%
Weakly False: 20-25% vs 15%
False: 35-45% vs 50%
Looking at the discrepancy, it doesn't seem like any systematic skewing adjustment, i.e. adjusting for overconfidence, would have gotten good results. (The closest would be one that pivoted on 'Weakly False'.) A better model would be to assume that each prediction had a modest chance of being other-than-totally-false, uniformly distributed over degree of truth, with the rest totally false.
Therefore, I predict that if these predictions are examined again in 10 or 20 years' time, they will still have this uniform-distribution-over-degree-of-truth property, though presumably with a higher chance of not being totally false.
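To make that mixture model concrete, a minimal Python sketch (an illustration only; the "other-than-totally-false" weight q is a free parameter, here set to 0.5 so that the modelled "False" share matches the observed 50%):

```python
import numpy as np

grades = ["True", "Weakly True", "Undecided", "Weakly False", "False"]
observed = np.array([0.12, 0.12, 0.10, 0.15, 0.50])  # 2019 shares quoted above

def mixture_model(q_not_totally_false):
    """With probability q a prediction is other-than-totally-false,
    spread uniformly over the four non-"False" grades; otherwise it is
    totally false ("False")."""
    probs = np.full(5, q_not_totally_false / 4)
    probs[4] = 1 - q_not_totally_false
    return probs

model = mixture_model(0.5)  # q chosen so the modelled "False" share is 50%
for grade, obs, mod in zip(grades, observed, model):
    print(f"{grade:>12}: observed {obs:.0%}, model {mod:.0%}")
```

With q = 0.5 the model gives 12.5% in each of the four non-"False" grades, against the observed 12%, 12%, 10% and 15%, which is roughly the uniform pattern described above.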
Strong upvote for writing your own predictions before seeing the 2019 graph.
Curated.
I think "futurism with good epistemics" is pretty hard, and pretty important. The LessWrong zeitgeist is sort of "Post Kurzweil" – his predictions aren't the ones that we'll personally be graded on. But, I think the act of methodically looking over his predictions helps us orient on the predictions we're making.
I think a) it offers a cautionary tale of mistakes we might be making, and b) I think the act of having a strong tradition of evaluating long-past predictions (hopefully?) helps ward off bullshit. (i.e. many pundits make predictions which skew towards 'locally sound exciting and impressive' because they don't expect to be called on it later)
It's also interesting to note how much disagreement there was over some predictions.
One question I came away with:
I think "futurism with good epistemics" is pretty hard, and pretty important. The LessWrong zeitgeist is sort of "Post Kurzweil" – his predictions aren't the ones that we'll personally be graded on. But, I think the act of methodically looking over his predictions helps us orient on the predictions we're making.
I think a) it offers a cautionary tale of mistakes we might be making, and b) I think the act of having a strong tradition of evaluating long-past predictions (hopefully?) helps ward off bullshit. (i.e. many pundits make predictions which skew towards 'locally sound exciting and impressive' because they don't expect to be called on it later)
It's also interesting to note how much disagreement there was over some predictions.
One question I came away with:
It's been suggested that Kurzweil's predictions for 2009 are mostly correct in 2019.
Is this well established? Is there a previous writeup that argues this, or just a general feel? I'd be interested in applying the same methodology to the old 2009 predictions and checking if they're true.
https://www.futuretimeline.net/forum/topic/17903-kurzweils-2009-is-our-2019/ , forwarded to me by Daniel Kokotajlo (I added a link in the post as well).
I tried doing a PCA of the judgments, to see if there was any pattern in how the predictions were judged. However, the variance of the principal components did not decline fast. The first component explains just 14% of the variance, the next ones 11%, 9%, 8%... It is not like there are some very dominant low-dimensional or clustering explanation for the pattern of good or bad predictions.
No clear patterns when I plotted the predictions in PCA-space: https://www.dropbox.com/s/1jvhzcn6ngsw67a/kurzweilpredict2019.png?dl=0 (In this plot colour denotes mean assessor view of correctness, with red being incorrect, and size the standard deviation of assessor views, with large corresponding to more agreement). Some higher order components may correspond to particular correlated batches of questions like the VR ones.
(Or maybe I used the Matlab PCA routine wrong).
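For anyone who wants to reproduce this check, here is a rough Python equivalent (a sketch only, not the original Matlab code; the layout of one row per prediction and one column per assessor is assumed from footnote 2, and the matrix below is placeholder data standing in for the linked spreadsheet):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Placeholder assessment matrix: rows = 105 predictions, columns = 34 assessors.
scores = np.random.randint(1, 6, size=(105, 34)).astype(float)

# If some assessments are missing (NaN), fill with that assessor's column mean.
col_means = np.nanmean(scores, axis=0)
scores = np.where(np.isnan(scores), col_means, scores)

pca = PCA()
coords = pca.fit_transform(scores)          # predictions in PCA-space
print(pca.explained_variance_ratio_[:4])    # cf. ~14%, 11%, 9%, 8% above

# Scatter the first two components, coloured by mean verdict (1=true, 5=false).
mean_verdict = scores.mean(axis=1)
plt.scatter(coords[:, 0], coords[:, 1], c=mean_verdict, cmap="RdYlGn_r")
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.colorbar(label="mean verdict (1 = true, 5 = false)")
plt.show()
```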
Plot visualised: [the PCA plot linked above, embedded as an image]
The hypothesis that Kurzweil is basically right but off by 10 years (and thus, that these predictions will be mostly true in 2029) seems less plausible to me than the hypothesis that Kurzweil is basically right but thinks everything will happen twice as fast as it does (and thus, that these predictions will be mostly true in 2039). I'd give the first hypothesis about 15% credence and the second about 25%.
Edited my post to reflect this possibility.
Personally, I don't think there will be anything consistent like that; just some predictions right, some premature, some wrong. I note that most of the predictions seem to be of the type "once true, always true".
I predicted the graph would be similar; and I was indeed much too optimistic. In fact, I was so wrong it reminded me of this exact issue, where prediction becomes more and more impossibly difficult with every additional decade out you go. It may also be getting even more difficult given our progress as a large unit of populations, but the underlying humanistic predictions may still well be possible, since that variable never quite leaves us.
If anyone has read 'Where To?' by Robert Heinlein, he makes predictions in 1950 and then updated his predictions in 1965 and 1980. Here is an excerpt on an alternative site since I don't have a direct link to the book itself, but if one is interested in the predictive ability of a well-educated individual, it is a very intriguing read, and archiving perspectives lets us understand just how absurd one idea could be, and yet how much further than it we got in a much shorter time frame than assumed. It focuses on technology, and the potential of certain science fiction possibilities, some of which were undershot, but most overshot. On average, he was almost entirely wrong, but only by some degrees of relevancy. Here is the excerpt of the book, and since it only extends to 1980, you can insert your own realizations of how the predictions went for 2000, and now 2020. There are many optimistic predictions made, and a few pessimistic ones. Being an aeronautical engineer and author, Heinlein possibly saw too much potential in our ability to progress that science; the same flaw would probably be present in any predictions made, since we know our favored subjects best, and want to believe in their potential.
I recall Alan Watts made some very interesting comments that essentially predicted a lot of our current access to phones, internet, information, etc. It isn't difficult to imagine how far we may come or how easy it may be when progress is not impeded. What is likely absurdly difficult to predict is how Google loading in four seconds instead of two can make someone upset to the point they may seethe or clench their fists. And yet, I have done that more than once, or seen that small red x icon, and then gotten similarly upset in private. The entire world is available to me, and two seconds or so of inconvenience, or possibly a modem reset, being an antagonizing factor is only context-driven. At what point does the time saving become unnoticeable? Perhaps what we should consider is not the exponential growth of some factor of our lives, or its understanding and availability, but the brand new frustrations and reasons for emotion it brings. Road rage is a common example, and that comes from the novel nature of driving wearing off into monotony. It was great to drive the first few weeks and the freedom was liberating, until that freedom was stripped because I'm sitting on the I-5 and no one is dying, it's just 5PM and everyone is going home.
An optimistic prediction would be made to 'ease' that frustration - flying cars, better infrastructure, etc. - but what about the new frustrations of those predictions? Having to deal with the FAA instead of the police every time you want to go eat? The city becoming sprawling and difficult to navigate, but very streamlined and without 'stops', or even simply every trip becoming much longer as the concept of not slowing down is pushed? Although contradictory, going faster when you're taking a longer route is not always favorable, but to an impatient crowd, it may solve the more pressing issue: no one will be able to get out of the car to yell at the guy behind them!
"It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections."Is this really a prediction? I would call it "A blindingly obvious fact." This page says "Herophilus not only distinguished the cerebrum and the cerebellum, but provided the first clear description of the ventricles", and the putamen and corpus callosum were discovered in the 16th century, etc. etc. Sorry if I'm misunderstanding, I don't know the context.
ETA: Maybe I should be more specific and nuanced. I think it's uncontroversial and known for hundreds if not thousands of years that the brain comprises many regions which look different—for example, the cerebellum, the putamen, etc. I think it's also widely agreed for 100+ years that each is "specialized", at least in the sense that different regions have different functions, although the term is kinda vague. The idea that "each [has] its own topology and architecture of interneuronal connections" is I think the default assumption ... if they had the same topology and architecture, why would they look different? And now that we know what neurons are and have good microscopes, this is no longer just a default assumption, but an (I think) uncontroversial observation.
Here's the context:
[...] Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices. Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.
The majority of "computes" of computers are now devoted to massively parallel neural nets and genetic algorithms.
Significant progress has been made in the scanning-based reverse engineering of the human brain. It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections. The massively parallel algorithms are beginning to be understood, and these results have been applied to the design of machine-based neural nets. It is recognized that the human genetic code does not specify the precise interneuronal wiring of any of the regions, but rather sets up a rapid evolutionary process in which connections are established and fight for survival. The standard process for wiring machine-based neural nets uses a similar genetic evolutionary algorithm. [...]
Thanks, that actually helps a lot; I didn't get that it was from the voice of someone in the future. I still don't see any way to make sense of that as a "prediction", i.e. something that is true but was not fully recognized in 1999.
The closest thing I can think of that would make sense is if he were claiming that the neocortex comprises many specialized regions, each with its own topology and architecture of interneuronal connections (cf zhukeepa's post a couple days ago). But that's not it. Not only would Kurzweil be unlikely to say "brain" when he meant "neocortex", but I also happen to know that Kurzweil is a strong advocate against the idea that the neocortex comprises many architecturally-different regions. Well, at least he advocated for cortical uniformity in his 2012 book, and when I read that I also got the impression that he had believed the same thing for a long time before that.
I think he put that in and phrased it as a prediction just for narrative flow, while setting up the subsequent sentences ... like if he had written
"It is now fully recognized that every object is fundamentally made out of just a few dozen types of atoms. Therefore, molecular assemblers with the right feedstock can make any object on demand..."
or whatever. The first sentence here is phrased as a prediction but it isn't really.
I didn't judge whether it was plausible or trivial; I just took out everything that was formulated as a prediction for the future.
It looks like two of the predictions, that the majority of teacher-student interactions would be remote and that the majority of meetings would be remote, have flipped from false to true between 2019 and 2020, but because of a global pandemic rather than directly proceeding from advancements in technology.
One of the things I find really hard about tech forecasting is that most of tech adoption is driven by market forces / comparative economics ("is solar cheaper than coal?"), but raw possibility / distance in the tech tree is easier to predict ("could more than half of schools be online?"). For about the last ten years we could have had the majority of meetings and classes online if we wanted to, but we didn't want to--until recently. Similarly, people correctly called that the Internet would enable remote work, in a way that could make 'towns' the winners and 'big cities' the losers--but they incorrectly called that people would prefer remote work to in-person work, and towns to big cities.
[A similar thing happened to me with music-generation AI; for years I think we've been in a state where people could have taken off-the-shelf method A and done something interesting with it on a huge music dataset, but I think everyone with a huge music dataset cares more about their relationship with music producers than they do about making the next step of algorithmic music.]
for years I think we've been in a state where people could have taken off-the-shelf method A and done something interesting with it on a huge music dataset
Absolutely. I got decent enough results just tinkering with GPT-2, and OpenAI's Jukebox could have been done at smaller scale years ago, and OA could presumably do a lot better right now if they had a few million to spare (Jukebox has only ~7b parameters, while GPT-3 has 175b, and Jukebox is pretty close to human-level, so just another 10x seems like it'd make it an extremely useful tool commercially).
Browsing Wikipedia, I found a similar effort: the 1985 book Tools for Thought (available here), though I haven't read it.