Vernor Vinge has died (file770.com)
1151 points by sohkamyung 58 days ago | 320 comments



Please mirror, because more people should have a copy of this: https://3e.org/vvannot

This is Vinge's annotated copy of A Fire Upon the Deep. It has all his comments and discussion with editors and early readers. It provides an absolutely fascinating insight into his writing process and shows the depth of effort he put into making sure everything made sense.


There's an interview with Vinge from 2009 [0] which contains a screenshot [1] of him using Emacs with his home-brewed proto-Org-mode annotation system (which appears in parent's link).

[0]: https://web.archive.org/web/20170215121054/http://www.norwes...

[1]: https://web.archive.org/web/20170104130412/http://www.norwes...


Based on the modeline, it's just text-mode using RCS as version control. It must be a minor mode that he wrote.


Based on what I read in the annotations mentioned above, I didn't see any implication of software support for his special markup beyond basic text-editing features like search.


Thank you - I love HN for things like this! A Fire Upon the Deep is one of my favorite books/series. RIP


Except for the weird dogs. "A Deepness in the Sky" is the pinnacle for me.


Personally I found the chapters about the spiders absurdly boring. They're just big sentient spiders; nothing novel like a group mind. Fortunately the other half of the Deepness plot more than made up for it though.


> Personally I found the chapters about the spiders absurdly boring.

They're giant sentient spiders living in a 50's sitcom. What's not to like?


I quit that book at the dogs. I understand that it's a great book and I ought to pick it up again, but goddamn those chapters were bad. And to be honest, the glaring similarity between early internet and nntp and Vinge's far future networking was distracting. Perhaps I'm misremembering that part...

True Names was pretty great.


I couldn't buy the audio relaying wolf packs either, but enjoyed the other aspects of his fiction.


It's always a little sad when you can tell the things the author finds the most interesting about their fiction aren't the same things you find the most interesting. I wanted a lot more of the blight and the powers and the zones and a lot less dogs talking to children.


He's intentionally trying to make the superintelligences stay superintelligent by not describing them. I think it's pretty effective. He did the same thing in Across Realtime.

His first book Tatja Grimm's World was also about superintelligent people (or the whole rest of the planet except them was sub-intelligent, or something… it's not clear). That one is, hmm, not nearly as good as his later ones. So I think it's just as well he didn't try.


What?! Blasphemy! A computer nerd created a fictional distributed intelligence, and you were not entertained?

Best aliens ever.


Yeah seriously.

The third book -- while not nearly as good as either Fire Upon The Deep or Deepness in the Sky -- had even more fascinating insights into the practical mechanics of a group mind with physical bodies. Like the fact that romance happens on two separate levels!


Thanks for mirroring this! This was only published on an old CD for the '93 Hugo winners, and I had a devil of a time trying to find a copy (inter-library-loan, etc) before realizing someone had archived it on archive.org. It is indeed well worth the time spent if you're a fan of Fire.


Is this annotated version on the archive.org CD? I couldn't find it in https://archive.org/download/hugo_nebula_1993


you have to open hugo.zip, or click on the view contents link beside it


Yes, I did that. I see the Vinge novel in there, but NOT the version with annotations.


The annotations are there in the RTF files, but there is something quirky about the format of those RTF files - perhaps they predate standardization or something. If you open one of the RTFs in a straight text editor like emacs or vi, you'll see them. There was a bit of discussion around this here, a few years ago, when this version was re-released [1]

[1] https://news.ycombinator.com/item?id=24876236


Oh wow thanks!


This seems to do the trick:

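    # -m mirror, -k rewrite links for local viewing, -x force directory structure, -e robots=off ignore robots.txt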
    wget -mkx -e robots=off https://3e.org/vvannot/


The key insight always was hexapodia.


IIRC from the annotations (it's been a while), Vinge did not intend that Twirlip was right about everything; Twirlip was merely meant to be a representation of the weird things you used to get on Usenet. But it worked out fairly well. (On the one hand, this might technically be a spoiler, but on the other, I think in practice even knowing this tidbit won't actually give anything away.)

(I'm glad someone linked to this. I actually bought the annotated edition a while back and was reading it back in the Palm Pilot era, I think, but I've lost it and never quite finished it. So I'm happy to see it and have no qualms for myself about grabbing it.)


I'd be pretty comfortable advocating for (metaphorical) Death of the Author on this one if this weren't, you know, a thread about the literal death of the author.


I actually read A Fire Upon the Deep over Christmas, and then went on with the rest. The entire trilogy is pretty amazing.


I really wish he had wrapped everything up.


Eh, I have mixed feelings on that.

There are cliff-hanger endings, or clearly unfinished works, but I don't mind a bit of uncertainty at the edge of the story.

Compare The Wheel of Time to The Lord of the Rings.

At the end of LotR, it is over. Not only is the Dark Lord defeated forever, the elves, the wizards, all magic, and our two protagonists leave the world for the Undying Lands. We're told of the lives and deaths of all the other main characters. The world is finished. There are no stories left to tell.

At the end of WoT, it's just an ending. We're told of a great cycle to the universe, but half of it remains a mystery. Our protagonists are barely into their twenties, and the world has just been turned upside down. What will happen next? Anything, everything.

I don't mind the Zones of Thought universe being left open-ended, even if I would have preferred a little more. It's the sort of universe that shouldn't be wrapped up completely.


Tolkien did start writing a sequel to LOTR set a generation or two later, but he abandoned it because it was coming out too grimdark for him.


Yes, I've been hoping for this; one amongst many reasons to be sad that Vinge didn't live longer.


Was he in the process of wrapping everything up?



The read-first file says

> In this form, it is possible to read the story without being bothered by the comments -- yet be able to see the comments on demand. (Because of production deadlines I have not seen the exact user interface for the Clarinet edition, and so some of this discussion may be slightly inconsistent with details of the final product.)

Did the final product not hold up, or is the page not presenting it right?


The page is not presenting right, I think. This HN comment from a few years ago [1] points to a script that purports to munge the files into modern HTML, but I have not tried it myself.

[1] https://news.ycombinator.com/item?id=24876236


That's interesting but I found it incredibly difficult to read/parse through. I've read A Fire Upon the Deep many times (the whole trilogy) but the comment syntax is not easy for me to follow at all. There are snippets that make a little sense but I don't think I could read this as-is.


A Fire Upon the Deep is one of the all-time great science fiction novels.


Wow! This is the internet find of the week for me. How long until this appears as its own post on the HN front page? Thanks for mirroring.


I got this on CD-ROM back in the 90's. It was really fun looking through stuff.


Yes, the Hugo-Nebula 1993 CD-ROM. That included some of the earliest (some say the earliest) examples of ebooks based on current fiction (rather than on out-of-copyright classic books). I have it myself still somewhere.


Yes, that was it! I haven't seen mine for a couple of decades. I purged my stuff too aggressively, so I think I won't ever see it again.


He coined the concept of the 'singularity' in the sense of machines becoming smarter than humans. What a time for him to die, with all the advancements we're seeing in artificial intelligence. I wonder what he thought about it all.

>The concept and the term "singularity" were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole",[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.

Looks like he was spot on.


Just to clarify, the “singularity” conjectures a slightly different and more interesting phenomenon, one driven by technological advances, true, but its definition was not those advances.

It was more the second derivative of future shock: technologies and culture that enabled and encouraged faster and faster change until the curve bent essentially vertical…asymptoting to a mathematical singularity.

An example he spoke of was that, close to the singularity, someone might found a corporation, develop a technology, make a profit from it, and then have it be obsolete by noon.

And because you can’t see the shape of the curve on the other side of such a singularity, people living on the other side of it would be incomprehensible to people on this side.

Ray Lafferty’s 1965 story “Slow Tuesday Night” explored this phenomenon years before Toffler wrote “Future Shock”.


Note that the "Singularity" turns up in the novel

https://en.wikipedia.org/wiki/Marooned_in_Realtime

where people can use a "Bobble" to freeze themselves in a stasis field and travel in time... forward. The singularity is some mysterious event that causes all of unbobbled humanity to disappear, leaving the survivors wondering, even tens of millions of years later, what happened. As such it is one of the best premises ever in sci-fi. (I am left wondering, though, if the best cultural comparison is "The Rapture" some Christians believe in, making this more of a religiously motivated concept than sound futurism.)

I've long been fascinated by this differential equation

  dx
  -- = x^2
  dt
which has solutions that look like

  x = 1/(t₀-t)
which notably blows up at time t₀. It's a model of an "intelligence explosion" where improving technology speeds up the rate of technological progress, but the very low growth when t ≪ t₀ could also be a model for why it is hard to bootstrap a two-sided market, why some settlements fail, etc. About 20 years ago I was very interested in ecological accounting and wondering if we could outrace resource depletion and related problems; I did a literature search for people developing models like this further and was pretty disappointed not to find much, although it did appear as a footnote in the ecology literature here and there. Even papers like

https://agi-conf.org/2010/wp-content/uploads/2009/06/agi10si...

seem to miss it. (Surprised the lesswrong folks haven't picked it up but they don't seem too mathematically inclined)
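For the curious, here's a minimal numerical sketch of that blow-up (forward Euler, with a toy initial value and step size of my choosing): with x(0) = 0.1 the exact solution x = 1/(10 − t) diverges at t₀ = 10, and the simulation shoots off right around there.

    # dx/dt = x^2 via forward Euler; finite-time blow-up expected at t0 = 1/x0
    x, t, dt = 0.1, 0.0, 1e-4   # x0 = 0.1  =>  t0 = 10
    while t < 12:
        x += x * x * dt
        t += dt
        if x > 1e9:
            print(f"blew up near t = {t:.2f} (exact t0 = 10)")
            break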

---

Note I don't believe in the intelligence explosion because what we've seen in "Moore's law" recently is that each generation of chips is getting much more difficult and expensive to develop, whereas the benefits of shrinks are shrinking, and in fact we might be rudely surprised that the state-of-the-art chips of the near future (and possibly 2024) burn up pretty quickly. It's not so clear that chipmakers would have continued to invest in a new generation if governments weren't piling huge money into a "great powers" competition... That is, we might already be past the point of economic returns.


IMHO Marooned in Realtime is the best Vinge book. Besides being a dual mystery novel, it really explores the implications of bobble technology and how just a few hours of technology development near the singularity can be extreme.


Yep. I like it better than Fire Upon the Deep but I do like both of them. I didn’t like A Deepness in the Sky as it was feeling kinda grindy, like Dune. (I wish we could just erase Dune so people could enjoy all of Frank Herbert’s other novels, of which I love even the bad ones.)


The first time I read A Deepness In The Sky, I was a bit annoyed, because I was excited for the A plot to progress, and it felt like we were spending an awful lot of time on B & C.

On a second read, when I knew where the story was going and didn't need the frisson of resolution, I enjoyed it much more. It's a good B & C plot, and it all does tie in. But arguably the pacing is off.


Can you recommend a non-Dune Herbert book? I recall seeing Dosadi when I was a kid in the sci fi section of the library and just never picked it up. I generally like hard sci-fi and my main issue with Dune was that it went off into the weeds too many times.


I like the Dosadi books, Whipping Star, the short stories in Eye, Eyes of Heisenberg, Destination: Void, The Santaroga Barrier (which my wife hates), Under Pressure and Hellstrom's Hive. If I had to pick just one it might be Whipping Star but maybe Under Pressure is the hardest sci-fi.


Whipping Star has some amazing alien vs human discourse (at least, that's my memory from ~20 years ago!). It was the first time I found alien dialog that didn't sound like repackaged English.


I loved 'The Jesus Incident' [0], which he co-authored with Bill Ransom - when I read it as a teenager in the 80s, it felt so 'adult' compared to a lot of the other science fiction I had read to that point.

I later read the prequel and did not like it. I never read the third book in the trilogy.

[0] https://en.wikipedia.org/wiki/The_Jesus_Incident


A quirky (but nonetheless good) novel is "The Green Brain". It has some of Dune's ecological sensibility, but is otherwise completely different.


I _hated_ The Green Brain, but that was mostly because he had all the characters say everything in Portuguese, then repeat themselves in English. It was as if there was an echo in the room.


I'm also a bit sceptical of an intelligence explosion, but compute per dollar had been increasing in a steady exponential way since long before Moore's law and will probably continue after it. There are ways to progress other than shrinking transistors.


Even though we understand a lot more about how LLMs work and have cut resource consumption dramatically in the last year, we still know hardly anything, so it seems quite likely there is a better way to do it.

For one thing dense vectors for language seem kinda insane to me. Change one pixel in a picture and it makes no difference to the meaning. Change one letter in a sentence and you can change the meaning completely so a continuous representation seems fundamentally wrong.


I get the impression human brains process things a lot more efficiently, so there's probably a way to go there.


Well, they do manage to get by on about 20 W.


from http://extropians.weidai.com/extropians.3Q97/4356.html

The bobble is a speculative technology that originated in Vernor Vinge's science fiction. It allows spherical volumes to be enclosed in complete stasis for controllable periods of time. It was used in _The Peace War_ as a weapon, and in _Marooned in Realtime_ as a way for humans to tunnel through the Singularity unchanged.

As far as I know, the bobble is physically impossible. However it may be possible to simulate its effects with other technologies. Here I am especially interested in the possibility of tunneling through the Singularity.

Why would anyone want to do that, you ask? Some people may have long term goals that might be disrupted by the Singularity, for example maintaining Danny Hillis's clock or keeping a record of humanity. Others may want to do it if the Singularity is approaching in an unacceptable manner and they are powerless to stop or alter it. For example an anarchist may want to escape a Singularity that is dominated by a single consciousness. A pacifist may want to escape a Singularity that is highly adversarial. Perhaps just the possibility of tunneling through the Singularity can ease people's fears about advanced technology in general.

Singularity tunneling seems to require a technology that can defend its comparatively powerless users against extremely, perhaps even unimaginably, powerful adversaries. The bobble of course is one such technology, but it is not practical. The only realistic technology that I am aware of that is even close to meeting this requirement is cryptography. In particular, given some complexity theoretic assumptions it is possible to achieve exponential security in certain restricted security models. Unfortunately these security models are not suitable for my purpose. While adversaries are allowed to have computational power that is exponential in the amount of computational power of the users, they can only interact with the users in very restricted ways, such as reading or modifying the messages they send to each other. It is unclear how to use cryptography to protect the users themselves instead of just their messages. Perhaps some sort of encrypted computation can hide their thought processes and internal states from passive monitors. But how does one protect against active physical attacks?

The reason I bring up cryptography, however, is to show that it IS possible to defend against adversaries with enormous resources at comparatively little cost, at least in certain situations. The Singularity tunneling problem should not be dismissed out of hand as being unsolvable, but rather deserves to be studied seriously. There is a very realistic chance that the Singularity may turn out to be undesirable to many of us. Perhaps it will be unstable and destroy all closely-coupled intelligence. Or maybe the only entity that emerges from it will have the "personality" of the Blight. It is important to be able to try again if the first Singularity turns out badly.

and: http://lesswrong.com/lw/jgz/aalwa_ask_any_lesswronger_anythi...

"I do have some early role models. I recall wanting to be a real-life version of the fictional "Sandor Arbitration Intelligence at the Zoo" (from Vernor Vinge's novel A Fire Upon the Deep) who in the story is known for consistently writing the clearest and most insightful posts on the Net. And then there was Hal Finney who probably came closest to an actual real-life version of Sandor at the Zoo, and Tim May who besides inspiring me with his vision of cryptoanarchy was also a role model for doing early retirement from the tech industry and working on his own interests/causes."


A few people have pointed out that Sandor at the Zoo was more likely a reference to someone else, of course: ""The Zoo" etc. was a reference to Henry Spencer, who was known on Usenet for his especially clear posts. He posted from utzoo (University of Toronto Zoology.)"


This is a little concerning because it means there might be references to Archimedes Plutonium, Kibo and Elf Sternberg in there I might've missed.


Here's a link to the full text of _Slow Tuesday Night_: https://web.archive.org/web/20060719184509/www.scifi.com/sci...


with respect, we don’t know if he was spot on. Companies shoehorning language models into their products is a far cry from the transformative societal change he described. Nothing like a singularity has yet happened at the scale he describes, and it might not happen without more fundamental shifts/breakthroughs in AI research.


What we're seeing right now with LLMs is like music in the late 30s after the invention of the electric guitar. At that point people still had no idea how to use it, so they treated it like an amplified acoustic guitar. It took almost 40 years for people to come up with the idea of harnessing feedback and distortion to use the guitar to create otherworldly soundscapes, and another 30 beyond that before people even approached the limit of the guitar's range with pedals and such.

LLMs are a game changer that are going to enable a new programming paradigm as models get faster and better at producing structured output. There are entire classes of app that couldn't exist before because there was a non-trivial "fuzzy" language problem in the loop. Furthermore I don't think people have a conception of how good these models are going to get within 5-10 years.


> Furthermore I don't think people have a conception of how good these models are going to get within 5-10 years.

Pretty sure it's quite the opposite of what you're implying: people see those LLMs, which closely resemble actual intelligence on the surface but have some shortcomings. Now they extrapolate this and think it's just a small step to perfection and/or AGI, which is completely wrong.

One problem is that converging to an ideal is obviously non-linear, so getting the first 90% right is relatively easy, and closer to 100% it gets exponentially harder. Another problem is that LLMs are not really designed to contain actual intelligence in the way humans would expect, so any apparent reasoning is very superficial, as it's just language-based and statistical.

In a similar spirit, science fiction stories set in the near future often tend to feature spectacular technology, like flying personal cars, in-eye displays, beam travel, or mind reading devices. In the 1960s it was predicted for the 80s, in the 80s it was predicted for the 2000s, etc.


This book

https://www.amazon.com/Friends-High-Places-W-Livingston/dp/0...

tells (among other things) a harrowing tale of a common mistake in technology development that blindsides people every time: the project that reaches an asymptote instead of completion. It can get you to keep spending resources and spending resources because you think you have only 5% to go, except the approach you've chosen means you'll never get the last 4%. It's a seductive situation that tends to turn the team away from Cassandras who have a clear view.

Happens a lot in machine learning projects where you don’t have the right features. (Right now I am chewing on the problem of “what kind of shoes is the person in this picture wearing?” and how many image classification models would not at all get that they are supposed to look at a small part of the image and how easy it would be to conclude that “this person is on a basketball court so they are wearing sneakers” or “this is a dude so they aren’t wearing heels” or “this lady has a fancy updo and fancy makeup so she must be wearing fancy shoes”. Trouble is all those biases make the model perform better up to a point but to get past that point you really need to segment out the person’s feet.)


You are looking at things like the failure of full self driving due to massive long tail complexity, and extrapolating that to LLMs. The difference is that full self driving isn't viable unless it's near perfect, whereas LLMs and text to image models are very useful even when imperfect. In any field there is a sigmoidal progress curve where things seem to move slowly at first when getting set up, accelerate quickly once a framework is in place, then start to run out of low hanging fruit and have to start working hard for incremental progress, until the field is basically mined out. Given the rate that we're seeing new stuff come out related to LLMs and image/video models, I think it's safe to say we're still in the low hanging fruit stage. We might not achieve better than human performance or AGI across a variety of fields right away, but we'll build a lot of very powerful tools that will accelerate our technological progress in the near term, and those goals are closer than many would like to admit.


AGI (human level intelligence) is not really an end goal but a point that will be surpassed. So looking at it as something asymptotically approaching an ideal 100% is fundamentally wrong. That 100% mark is going to be in the rear view mirror at some point. And it's a bit of an arbitrary mark as well.

Of course it doesn't help that people are a bit hand wavy about what that mark exactly is to begin with. We're very good at moving the goal posts. So that 100% mark has the problem that it's poorly defined and in any case just a brief moment in time given exponential improvements in capabilities. In the eyes of most we're not quite there yet for whatever there is. I would agree with that.

At some point we'll be debating whether we are actually there, and then things move on from there. A lot of that debate is going to be a bit emotional and irrational of course. People are very sensitive about these things and they get a bit defensive when you portray them as clearly inferior to something else. Arguably, most people I deal with don't actually know a lot, their reasoning is primitive/irrational, and if you'd benchmark them against an LLM it wouldn't be that great. Or that fair.

The singularity is kind of the point where most of the improvements to AI are going to come from ideas and suggestions generated by AI rather than by humans. Whether that's this decade or the next is a bit hard to predict obviously.

Human brains are quite complicated but there's only a finite number of neurons in there; a bit under 100 billion. We can waffle a bit about the complexity of their connections. But at some point it becomes a simple matter of throwing more hardware at the problem. With LLMs already pushing tens to hundreds of billions of parameters, you could legitimately ask what a few more doublings in numbers here enable.


I think you're falling for the exact same fallacy that I was describing. Also note that the human level of intelligence is not arbitrary at all: Most LLMs are trained on human-generated data, and since they are statistical models, they won't suddenly come up with truly novel reasoning. They're generally just faster at generating stuff than humans, because they're computers.


>But at some point it becomes a simple matter of throwing more hardware at the problem.

Insofar as it's simple to throw like six orders of magnitude more hardware at something that has already had a lot of hardware thrown at it.


In 5 to 10 years we will have likely moved on to the next big model architecture just like it was all about convolutional networks 5 to 10 years ago despite the pivotal paper being published in 2017.


Singularity doesn't necessarily rely on LLMs by any means. It's just that communication is improving and the number of people doing research is increasing. Weak AI is icing on top, let alone LLMs, which are being shoe-horned into everything now. VV clearly adds these two other paths:

            o Computer/human interfaces may become so intimate that users
              may reasonably be considered superhumanly intelligent.
            o Biological science may find ways to improve upon the natural
              human intellect.
https://edoras.sdsu.edu/~vinge/misc/singularity.html


Yeah this is the angle I look at the most, the Humans+Internet combo.

I don't believe LLMs will really get us much of anywhere, Singularity-wise. They're just ridiculously inefficient in terms of compute (and thus power) needs to even do the basic pattern-prediction they do today. They're neat tools for human augmentation in some cases, but that's about all they contribute.

I think, even prior to the recent explosion of LLM stuff, that the aggregate of Humans and the depth of their interconnections on the Internet is already starting to form at least the beginnings of a sort of Singularity, without any AI-related topics needing to be introduced. The way memes (real memes, not silly jokes) spread around the Internet and shape thoughts across all the users, the way the users bounce ideas off each other and refine them, the way viral advocacy and information sharing works, etc. Basically the Singularity is just going to be the emergent group consciousness and capabilities of the collective Internet-connected set of Humans.


> Within thirty years, we will have the technological means to create superhuman intelligence.

Blackwell.

> o Develop human/computer symbiosis in art: Combine the graphic generation capability of modern machines and the esthetic sensibility of humans. Of course, there has been an enormous amount of research in designing computer aids for artists, as labor saving tools. I'm suggesting that we explicitly aim for a greater merging of competence, that we explicitly recognize the cooperative approach that is possible. Karl Sims [22] has done wonderful work in this direction.

Stable Diffusion.

> o Develop interfaces that allow computer and network access without requiring the human to be tied to one spot, sitting in front of a computer. (This is an aspect of IA that fits so well with known economic advantages that lots of effort is already being spent on it.)

iPhone and Android.

> o Develop more symmetrical decision support systems. A popular research/product area in recent years has been decision support systems. This is a form of IA, but may be too focussed on systems that are oracular. As much as the program giving the user information, there must be the idea of the user giving the program guidance.

Cicero.

> Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace.

Trump.

> o Use local area nets to make human teams that really work (ie, are more effective than their component members). This is generally the area of "groupware", already a very popular commercial pursuit. The change in viewpoint here would be to regard the group activity as a combination organism. In one sense, this suggestion might be regarded as the goal of inventing a "Rules of Order" for such combination operations. For instance, group focus might be more easily maintained than in classical meetings. Expertise of individual human members could be isolated from ego issues such that the contribution of different members is focussed on the team project. And of course shared data bases could be used much more conveniently than in conventional committee operations. (Note that this suggestion is aimed at team operations rather than political meetings. In a political setting, the automation described above would simply enforce the power of the persons making the rules!)

Ingress.

> o Exploit the worldwide Internet as a combination human/machine tool. Of all the items on the list, progress in this is proceeding the fastest and may run us into the Singularity before anything else. The power and influence of even the present-day Internet is vastly underestimated. For instance, I think our contemporary computer systems would break under the weight of their own complexity if it weren't for the edge that the USENET "group mind" gives the system administration and support people!) The very anarchy of the worldwide net development is evidence of its potential. As connectivity and bandwidth and archive size and computer speed all increase, we are seeing something like Lynn Margulis' [14] vision of the biosphere as data processor recapitulated, but at a million times greater speed and with millions of humanly intelligent agents (ourselves).

Twitter.

> o Limb prosthetics is a topic of direct commercial applicability. Nerve to silicon transducers can be made [13]. This is an exciting, near-term step toward direct communication.

Atom Limbs.

> o Similar direct links into brains may be feasible, if the bit rate is low: given human learning flexibility, the actual brain neuron targets might not have to be precisely selected. Even 100 bits per second would be of great use to stroke victims who would otherwise be confined to menu-driven interfaces.

Neuralink.

---

https://justine.lol/dox/singularity.txt


>> > Within thirty years, we will have the technological means to create superhuman intelligence.

> Blackwell.

I'm fucking sorry but there is no LLM or "AI" platform that is even real intelligence, today, easily demonstrated by the fact that an LLM cannot be used to create a better LLM. Go on, ask ChatGPT to output a novel model that performs better than any other model. Oh, it doesn't work? That's because IT'S NOT INTELLIGENT. And it's DEFINITELY not "superhuman intelligence." Not even close.

Sometimes accurately regurgitating facts is NOT intelligence. God it's so depressing to see commenters on this hell-site listing current-day tech as ANYTHING approaching AGI.


> Oh, it doesn't work? That's because IT'S NOT INTELLIGENT.

Ok, let's run this test of "real intelligence" on you. We eagerly await to see your model. Should be a piece of cake.


> an LLM cannot be used to create a better LLM

By that logic most humans are also not intelligent.


You didn't read him correctly; he's not saying Blackwell is AGI. I believe that he's saying that perhaps Blackwell could be computationally sufficient for AGI if "used correctly."

I don't know where that "computationally sufficient" line is. It'll always be fuzzy (because you could have a very slow, but smart entity). And before we have a working AGI, thinking about how much computation we need always comes down to back of the envelope estimations with radically different assumptions of how much computational work brains do.

But I can't rule out the idea that current architectures have enough processing to do it.


I don't use the A word, because it's one of those words that popular culture has poisoned with fear, anger, and magical thinking. I can at least respect Kurzweil though and he says the human brain has 10 petaflops. Blackwell has 20 petaflops. That would seem to make it capable of superhuman intelligence to me. Especially if we consider that it can focus purely on thinking and doesn't have to regulate a body. Imagine having your own video card that does ChatGPT but 40x smarter.


I think there's a big focus on petaflops and that it may have been a good measure to think about initially, but now we're missing the mark.

If a human brain does its magic with 10 petaflops, and you have 1 petaflop, you should be able to make an equivalent to the human brain that runs at 1/10th of the speed but never sleeps. In other words, once you've reached the same order of magnitude it doesn't matter.

On the other hand, Kurzweil's math really comes down to an argument that the brain is using about 10 petaflops for inference, but it also is changing weights and doing a lot more math and optimization for training (which we don't completely understand). It may (or may not) take considerably more than 10 petaflops to train at the rate humans learn. And remember, humans take years to do anything useful.

Further, 10 petaflops may be enough math, but it doesn't mean you can store enough information or flow enough state between the different parts "of the model."

These are the big questions. If we knew the answers, IMO, we would already have really slow AGI.


Yes I agree there's a lot of interesting problems to solve and things to learn when it comes to modeling intelligence. Vernor Vinge was smart in choosing the wording that we'd have the means to create superhuman intelligence by now, since no one's ever going to agree if we've actually achieved it.


Probably just a question of time constant / zoom on your time axis. When zoomed in up close, an exponential looks a lot like a bunch of piecewise linear components, where big breakthroughs are just discontinuous changes in slope...


Still has 6 years to be proven correct.


Imagine the first llm to suggest an improvement to itself that no human has considered. Then imagine what happens next.


OK. I'm imagining a correlation engine that looks through code as a series of prompts that are used to generate more code from the corpus that is statistically likely to follow.

And now I'm transforming that through the concept of taking a photograph and applying the clone tool via a light airbrush.

Repeat enough times, and you get uncompilable mud.

LLMs are not going to generate improvements.


Saying they definitely won't or they definitely will are equally over-broad and premature.

I currently expect we'll need another architectural breakthrough; but also, back in 2009 I expected no-steering-wheel-included self driving cars no later than 2018, and that the LLM output we actually saw in 2023 would be the final problem to be solved in the path to AGI.

Prediction is hard, especially about the future.


GPT4 does inference at 560 teraflops. Human brain goes 10,000 teraflops. NVIDIA just unveiled their latest Blackwell chip yesterday which goes 20,000 teraflops. If you buy an NVL72 rack of the things, it goes 1,400,000 teraflops. That's what Jensen Huang's GPT runs on I bet.


> GPT4 does inference at 560 teraflops. Human brain goes 10,000 teraflops

AFAICT, both are guesses. The estimates I've seen for human brains range from ~162 GFLOPS [0] to 10^28 FLOPS [1]; even just the model size for GPT-4 isn't confirmed, merely a combination of human inference from public information with a rumour widely described as a "leak", likewise the compute requirements.

[0] https://geohot.github.io//blog/jekyll/update/2022/02/17/brai...

[1] https://aiimpacts.org/brain-performance-in-flops/


They're not guesses. We know they use A100s and we know how fast an A100 goes. You can cut a brain open and see how many neurons it has and how often they fire. Kurzweil's 10 petaflops for the brain (100e9 neurons * 1000 connections * 200 calculations) is a bit high for me honestly. I don't think connections count as flops. If a neuron only fires 5-50 times a second then that'd put the human brain at .5 to 5 teraflops it seems to me. That would explain why GPT is so much smarter and faster than people. The other estimates like 1e28 are measuring different things.


> They're not guesses. We know they use A100s and we know how fast an A100 goes.

And we don't know how many GPT-4 instances run on any single A100, or if it's the other way around and how many A100s are needed to run a single GPT-4 instance. We also don't know how many tokens/second any given instance produces, so multiple users may be (my guess is they are) queued on any given instance. We have a rough idea how many machines they have, but not how intensively they're being used.

> You can cut a brain open and see how many neurons it has and how often they fire. Kurzweil's 10 petaflops for the brain (100e9 neurons * 1000 connections * 200 calculations) is a bit high for me honestly. I don't think connections count as flops. If a neuron only fires 5-50 times a second then that'd put the human brain at .5 to 5 teraflops it seems to me.

You're double-counting. "If a neuron only fires 5-50 times a second" = maximum synapse firing rate * fraction of cells active at any given moment, and the 200 is what you get from assuming it could go at 1000/second (they can) but only 20% are active at any given moment (a bit on the high side, but not by much).

Total = neurons * synapses/neuron * maximum synapse firing rate * fraction of cells active at any given moment * operations per synapse firing

1e11 * 1e3 * 1e3 Hz * 10% (of your brain in use at any given moment, where the similarly phrased misconception comes from) * 1 floating point operation = 1e16/second = 10 PFLOP
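As a sanity check, here's the same back-of-envelope arithmetic with each assumption as a named variable (all the numbers are the assumptions above, not measurements):

    # brain FLOPS estimate; every value is an assumption from the comment above
    neurons = 1e11              # neuron count
    synapses_per_neuron = 1e3   # connections per neuron
    firing_rate_hz = 1e3        # maximum synapse firing rate
    fraction_active = 0.10      # fraction of cells active at any moment
    flops_per_firing = 1        # FLOPs per synapse event (likely too low)
    total = (neurons * synapses_per_neuron * firing_rate_hz
             * fraction_active * flops_per_firing)
    print(f"{total:.1e} FLOP/s = {total / 1e15:.0f} PFLOPS")  # 1.0e+16 = 10 PFLOPS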

It currently looks like we need more than 1 floating point operation to simulate a synapse firing.

> The other estimates like 1e28 are measuring different things.

Things which may turn out to be important for e.g. Hebbian learning. We don't know what we don't know. Our brains are much more sample-efficient than our ANNs.


Synapses might be akin to transistor count, which is only roughly correlated with FLOPs on modern architectures.

I've also heard in a recent talk that the optic nerve carries about 20 Mbps of visual information. If we imagine a saturated task such as the famous gorilla walking through the people passing around a basketball, then we can arrive at some limits on the conscious brain. This does not count the autonomic, sympathetic, and parasympathetic processes, of course, but those could in theory be fairly low bandwidth.

There is also the matter of the "slow" computation in the brain that happens through neurotransmitter release. It is analog and complex, but with a slow clock speed.

My hunch is that the brain is fairly low FLOPs but highly specialized, closer to an FPGA than a million GPUs running an LLM.


> I don't think connections count as flops. If a neuron only fires 5-50 times a second then that'd put the human brain at .5 to 5 teraflops it seems to me.

That assumes that you can represent all of the useful parts of the decision about whether to fire or not to fire in the equivalent of one floating point operation, which seems to be an optimistic assumption. It also assumes there's no useful information encoded into e.g. phase of firing.


Imagine that there's a little computer inside each neuron that decides when it needs to do work. Those computers are an implementation detail of the flops being provided by neurons, and would not increase the overall flop count, since that'd be counting them twice. For example, how would you measure the speed of a game boy emulator? Would you take into consideration all the instructions the emulator itself needs to run in order to simulate the game boy instructions?


Already considered in my comment.

> Imagine that there's a little computer inside each neuron that decides when it needs to do work

Yah, there's -bajillions- of floating point operation equivalents happening in a neuron deciding what to do. They're probably not all functional.

BUT, that's why I said the "useful parts" of the decision:

It may take more than the equivalent of one floating point operation to decide whether to fire. For instance, if you are weighting multiple inputs to the neuron differently to decide whether to fire now, that would require multiple multiplications of those inputs. If you consider whether you have fired recently, that's more work too.

Neurons do all of these things, and more, and these things are known to be functional-- not mere implementation details. A computer cannot make an equivalent choice in one floating point operation.

Of course, this doesn't mean that the brain is optimal-- perhaps you can do far less work. But if we're going to use it as a model to estimate scale, we have to consider what actual equivalent work is.


I see. Do you think this is what Kurzweil was accounting for when he multiplied by 1000 connections?


Yes, but it probably doesn't tell the whole story.

There's basically a few axes you can view this on:

- Number of connections and complexity of connection structure: how much information is encoded about how to do the calculations.

- Mutability of those connections: these things are growing and changing -while doing the math on whether to fire-.

- How much calculation is really needed to do the computation encoded in the connection structure.

Basically, brains are doing a whole lot of math and working on a dense structure of information, but not very precisely because they're made out of meat. There's almost certainly different tradeoffs in how you'd build the system based on the precision, speed, energy, and storage that you have to work with.


That is based on an old assumption of neuron function.

Firstly, Kurzweil underestimates the number of connections by an order of magnitude.

Secondly, dendritic computation changes things. Individual dendrites and the dendritic tree as a whole can do multiple individual computations: logical operations, low-pass filtering, coincidence detection, ... One neuronal activation is potentially thousands of operations per neuron.

A single human neuron can be the equivalent of thousands of ANN neurons.


They might generate improvements, but I’m not sure why people think those improvements would be unbounded. Think of it like improvements to jet engines or internal combustion engines - rapid improvements followed by decades of very tiny improvements. We’ve gone from 32-bit LLM weights down to 16, then 8, then 4 bit weights, and then a lot of messy diminishing returns below that. Moore’s Law is running on fumes for process improvements, so each new generation of chips that’s twice as fast manages to get there by nearly doubling the silicon area and nearly doubling the power consumption. There’s a lot of active research into pruning models down now, but mostly better models == bigger models, which is also hitting all kinds of practical limits. Really good engineering might get to the same endpoint a little faster than mediocre engineering, but they’ll both probably wind up at the same point eventually. A super smart LLM isn’t going to make sub-atomic transistors, or sub-bit weights, or eliminate power and cooling constraints, or eliminate any of the dozen other things that eventually limit you.
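A toy illustration of the weight-precision point (uniform symmetric quantization of random Gaussian "weights"; my own sketch, not any particular production scheme): the rounding error roughly doubles with each bit removed, which is part of why the steps below 4 bits get so messy.

    import numpy as np

    # quantize float32 "weights" to n bits, then measure the rounding error
    rng = np.random.default_rng(0)
    w = rng.standard_normal(100_000).astype(np.float32)
    for bits in (16, 8, 4, 2):
        levels = 2 ** (bits - 1) - 1        # symmetric signed range
        scale = np.abs(w).max() / levels
        wq = np.round(w / scale) * scale    # quantize, then dequantize
        print(bits, "bits -> mean abs error", float(np.abs(w - wq).mean()))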


Saying that AI hardware is near a dead end because Moore's law is running out of steam is silly. Even GPUs are very general purpose, we can make a lot of progress in the hardware space via extreme specialization, approximate computing and analog computing.


I'm mostly saying that unless a chip-designing AI model is an actual magical wizard, it's not going to have a lot of advantage over teams of even mediocre human engineers. All of the stuff you're talking about is Moore's Law limited after 1-2 generations of wacky architectural improvements.


Bro, Jensen Huang just unveiled a chip yesterday that goes 20 petaflops. Intel's latest raptorlake cpu goes 800 gigaflops. Can you really explain 25000x progress by the 2x larger die size? I'm sure reactionary America wanted Moore's law to run out of steam but the Taiwanese betrayal made up for all the lost Moore's law progress and then some.


That speedup compared to Nvidia's previous generation came nearly entirely from: 1) a small process technology improvement from TSMC, 2) more silicon area, 3) more power consumption, and 4) moving to FP4 from FP8 (halving the precision). They aren't delivering the 'free lunch' between generations that we had for decades in terms of "the same operations faster and using less power." They're delivering increasingly exotic chips for increasingly crazy amounts of money.


Pro tip: If you want to know who is the king of AI chips, compare FLOPS (or TOPS) per chip area, not FLOPS/chip.

As long as the bottleneck is fab capacity in wafers per hour, the number of operations per second per chip area determines who will produce more compute at the best price. It's a good measure even between different technology nodes and superchips.

Nvidia is the leader for a reason.

If manufacturing capacity increases to match the demand in the future, FLOPS or TOPS per Watt may become relevant, but now it's fab capacity.


Taiwanese betrayal? I’m not sure I understand the reference.


There's no reference. It's just a bad joke. What they did was actually very good.


LLMs are so much more than you are assuming… text, images, code are merely abstractions to represent reality. Accurate prediction requires no less than usefully generalizable models and deep understanding of the actual processes in the world that produced those representations.

I know they can provide creative new solutions to totally novel problems from firsthand experience… instead of assuming what they should be able to do, I experimented to see what they can actually do.

Focusing on the simple mechanics of training and prediction is to miss the forest for the trees. It’s as absurd as saying how can living things have any intelligence? They’re just bags of chemicals oxidizing carbon. True but irrelevant- it misses the deeper fact that solving almost any problem deeply requires understanding and modeling all of the connected problems, and so on, until you’ve pretty much encompassed everything.

Ultimately it doesn’t even matter what problem you’re training for- all predictive systems will converge on general intelligence as you keep improving predictive accuracy.


LLM != AI.

An LLM is not going to suggest a reasonable improvement to itself, except by sheerest luck.

But then next generation, where the LLM is just the language comprehension and generation model that feeds into something else yet to be invented, I have no guarantees about whether that will be able to improve itself. Depends on what it is.


Yes, one eventually gets a series of software improvements which result in the best possible performance on currently available hardware --- if one can consistently get an LLM to suggest improvements to itself.

Until we get to a point where an AI has the wherewithal to create a fab to make its own chips and then do assembly w/o human intervention (something along the lines of Steve Jobs' vision of a computer factory where sand goes in at one end and finished product rolls out the other), it doesn't seem likely to amount to much.


That may happen more easily than you're suggesting. LLMs are masters at generating plausible sounding ideas with no regard to their factual underpinnings. So some of those computational bong hits might come up with dozens of plausible looking suggestions (maybe featuring made up literature references as well).

It would be left to human researchers to investigate them and find out if any work. If they succeed, the LLM will get all the credit for the idea, if they fail, it's them who will have wasted their time.


It has, anyway, already had a profound effect on the IT job market.


He popularized and advanced the concept, but it originally came from von Neumann.


The concept predates von Neumann.

The first known person to present the idea was the mathematician and philosopher Nicolas de Condorcet in the late 1700s. Not surprising, because he also laid out most of the ideals and values of modern liberal democracy as they are now. Amazing philosopher.

He basically invented the idea of ensemble learning (known as boosting in machine learning).

Nicolas de Condorcet and the First Intelligence Explosion Hypothesis https://onlinelibrary.wiley.com/doi/10.1609/aimag.v40i1.2855


That essay is written by a political scientist. His arguments aren't very persuasive. Even if they were, he doesn't actually cite the person he's writing about, so I have no way to check the primary materials. It's not like this is uncommon either. Everyone smart since 1760 has extrapolated the industrial revolution and imagined something similar to the singularity. Malthus would be a bad example and Nietzsche would be a good example. But John von Neumann was a million times smarter than all of them, he named it the singularity, and that's why he gets the credit.


Check out "Sketch for a Historical Picture of the Progress of the Human Mind", by Marquis de Condorcet, 1794. The last chapter, The Tenth epoch/The future progress of the human mind. There he lays out unlimited advance of knowledge, unlimited lifespan for humans, improvement of physical faculties, and then finally improvement of the intellectual and moral faculties.

And this was not some obscure author, but a leading figure in the French Enlightenment. Thomas Malthus wrote his essay on population as a counterargument.


There are some quotes but the guy seems to be talking about improving humans rather than anything AI-like:

"...natural [human] faculties themselves and this [human body] organisation could also be improved?"


That kind of niche knowledge is what I come to HN for!


Also "Darwin among the Machines"[0] written by Samuel Butler in 1863, that's 4 years after Darwin's "On the Origin of Species".

Butlerian jihad[1] is the war against machines in the Dune universe.

[0] https://en.wikipedia.org/wiki/Darwin_among_the_Machines

[1] https://dune.fandom.com/wiki/Butlerian_Jihad


Butler also expanded this idea in his 1872 novel Erewhon, where he described a seemingly primitive island civilization that turned out to have once had greater technology than the West, including mechanical AI, but abandoned it when they began to fear its consequences. A lot of 20th century SF tropes in the Victorian period.

https://en.wikipedia.org/wiki/Erewhon


> He wrote that he would be surprised if it occurred before 2005 or after 2030.

Being surprised is also an exciting outcome. Was he thinking about that too?


"We should destroy all AI out there before its taking over"


Oh man, this makes me sad.

I remember reading A Fire Upon the Deep based on a Usenet recommendation, and then immediately wanting to read everything else he wrote. A Deepness in the Sky is a worthy sequel.

He wasn’t prolific, but what he wrote was gold. He had a Tolkienesque ability to build world depth not by lengthy exposition, but by expert omission.

A true name in sci-fi.


'true name'?

https://en.wikipedia.org/wiki/True_Names

THE cyberpunk book.

Also, his later books are great but the "Across Realtime" trilogy has a special place in my heart. https://www.goodreads.com/en/book/show/167844


I’m pretty sure Yudkowsky read True Names and it’s what caused him to focus his life on the alignment problem.

That novella is basically an illustrated warning of misaligned super intelligence (it’s also really good!)


I still want augmented Chess to be a sport. You get a computer not weighing more than X pounds.


Let us not neglect The Peace War and Across Realtime. The former introduced memorable tragic figures, besides its singular vision.


Just finished this series literally an hour ago. Very fun reads! The second book was quite the page turner.


VV is up there with Stephenson and Gibson as the top 3. I don't put Asimov, etc in there since Asimov was hard sci-fi to the max and couldn't write a character to save his life, much like later Stephenson.

I wish I could find something else like VV's work that's sort of under-the-radar. I do have to mention that things like The Three Body Problem get hype, but are several tiers below VVs work.


Vinge is certainly one of the greats but so is David Brin. I would not consider him under the radar though. Some of his best are Earth, The Heart of The Comet, Glory Season.


"The Heart of the Comet" was co-authored with Gregory Benford. It is one of my favorite books, and I wish they would collaborate again.

Incidentally, Brin and Benford along with Greg Bear, are collectively known as the "Killer Bs". Practically everything written by any of the three is likely to be a great read.


I don't know Brin at all, my first thought was "Sergey?!" - will check out his books and appreciate the recommendation.


Brin has a post on his FB wall mourning Vinge.


I believe they were friends. Brin mentioned that he hung out with Vinge a few weeks ago.


not quite the same, but Iain M. Banks is in my top 5, along with Vernor Vinge.


I'd put Banks in the Asimov/Stephenson tier at best. His ideas were brilliant enough to sustain a long series - but not one book in it actually makes for a good read. Neal Asher doesn't have anything like the same ambition, but Hilldiggers is a better Culture novel than any of the actual Culture novels if you actually want to read and enjoy it.


I am glad someone else has the same viewpoint: Asimov and Stephenson are great hard sci-fi, but they can't write characters. Except Stephenson COULD, and did in Snow Crash and The Diamond Age to some degree, but then stopped being interested in it. VV, meanwhile, had good characters in his books, though I would argue he wasn't "excellent" at it either, and maybe Stephenson even surpassed VV in characters/story in Snow Crash.

I can't really judge Asimov - so many people stole from him that he just reads like War & Peace to me in the sense that it was probably awesome and novel at some point, but at this point it's just a little stale, even though Foundation was still cool...at least the first few books.

With that said, I don't put VV above Stephenson. Maybe above Gibson because I really appreciate the technical details, but not quite sure. To me VV, Stephenson, and Gibson are all the absolute top tier, at least in the sci fi realm. No one even comes close, as cool as some one offs by other authors, like Forever War, are.


I've so far only read Consider Phlebas, and while it's an interesting dystopia where humans have no purpose, and machines could do everything (if allowed), it's not an actually interesting story.

It's like he had the idea for the dystopia, with the main character fighting against the machines. And then tried to write a story around it. The central idea is interesting I guess, but the story built around it is not.

It doesn't help that the fight is futile, the machines as described are so powerful humanity doesn't stand a chance, so what kind of story can you make?

Maybe the later books are better.


> it's an interesting dystopia where humans have no purpose

Unfortunately every single one of his books has this problem. It's like all of his novels are written from the perspective of the protagonist's housecat.


Bizarrely, there's a second sequel to A Fire Upon The Deep, but it's never been digitised.


Children Of The Sky is certainly available digitally.


It's a bad book, nowhere close to the first two in any regard.


Well, the characters are stuck on a primitive planet in the Slow Zone so if you go in expecting Space Opera then you’ll be disappointed. If you go in with a more open mind then you may find that there’s actually an interesting philosophical point to be examined and a decent story built around it.


> Well, the characters are stuck on a primitive planet in the Slow Zone so if you go in expecting Space Opera then you'll be disappointed.

Except half of Fire Upon the Deep was characters on the same planet but it was actually cool. The first two books are definitely among my favorite sci-fi of all time, the third one was a dud.

My main gripe is that these three books all share the same trope that underpins one of the major subplots: glib, charming politician type is scheming, eeeeevil. In the first two books, there's enough novelty (how the Tines and Spiders work, programming as archaeology, localizer mania) to make up for that. But I don't really think the third book adds much in the same way, and it is also very clearly building to a confrontation that will happen in a future book. So the staleness is much more noticeable.


Does anyone know if he got started on another book in the series?


Yes, I recall that he started writing one about the invasion of the Emergents, but it was too depressing and he abandoned it.


It's not bad. It looks to be what would have become the first of a trilogy. It's just slow, and sets the stage for something that would culminate in another Fire Upon the Deep-tier finale.


People wanted to read how Pham Nuwen defeated the Emergents, learn more about the Zones of Thought or see the Blight finally destroyed once and for all.

No one wanted more Game of Dogs.


Well, I wanted more "Game of Dogs".

But it turned out that I didn't want that much more.


I liked the dog stuff, although that might be because I read The Blabber before I got into the novels.


I'm stuck halfway through Deepness in the Sky, I should pick it up again.

Also stuck on book 8 of the Wheel of Time series, I was like 5 chapters in and didn't pick up a single thread I cared about from the previous book.

Agree about the expert omission part.


Deepness was well worth it.

Wheel of Time, on the other hand, I was very glad to give up on right around the same point as you.


I think WoT is worth pushing through; you got stuck in the same spot a number of people do. There is definitely a lull there.

Many times I've considered re-cutting the books/audiobooks for WoT to remove what I find to be drudgery, but it would be a massive task that I'm not up to. I just skip over those parts in my re-reads of the series.

I'll be the first to say that WoT has /many/ flaws but it will forever hold a special place in my heart. You just have to get past the way women are written in the series (and I understand if you can't). That's something else I'd be happy to prune out or ideally fix, but that's well beyond my skill set. Elayne and Egwene especially are horribly written in the last few books (and it's not all Brandon Sanderson's fault, I assume; they aren't great in the prior books either).


Book eight may have been about when I gave up and sold the set. The dull bit in the middle of each book is when I would practice my speed reading.

About once a season I contemplate approaching either Sanderson or the Jordan estate and asking that they consider an abridged edition edited by Sanderson. You could easily knock 1500 pages out of this series and not change a single thing.

Meanwhile Rosamund Pike is doing new audiobooks for the series and the samples sound much better than the old ones. But the first one is about forty hours. As much as I might like to claim that I would listen to her read a phone book, I don't think I can listen to 600 hours of audiobooks for one series.


Yes, the self-indulgence reached record heights by timing death before completion. Just so we could have infinite fields of boring politics.

In the paperbacks, each chapter with the “bad guys” had a special symbol on it, so I just skipped through the middle books reading only those. Fortunately the Dark One was still trying to progress the plot, and I don’t think I missed much.


Aah! The infamous 'slog' of the WoT series. Book 10 is where things pick up again, in my opinion.


One of the true greats.

True Names is a better cyberpunk story than anything Gibson or Neal Stephenson wrote.

Everyone mentions A Fire Upon the Deep and A Deepness in the Sky, which are some of the best sci-fi ever written, but I think The Peace War is way underrated too (although it was nominated for a Hugo Award, which it lost to Neuromancer).

RIP


A Fire Upon the Deep and A Deepness in the Sky are the books that opened my eyes to the utter incomprehensibility and weirdness of what intelligent alien life would really be like if it's out there.

I also credit the Transcend as being the first plausible, secular explanation for "gods" that I ever came across back in my militant atheist days.

These stories will be with me until I am gone, too. Thank you, Vernor. RIP.


The Bobbler was a strange idea. It made for a fun concept. I think there was more than one story in that world, if I'm remembering correctly.

Rainbows End was very good!


> I think there was more than one story in that world

Two separate novels: 'The Peace War' and 'Marooned in Realtime', sold collectively as 'Across Realtime'. Enjoyed them both a lot, but for me 'Marooned...' packed more of an emotional punch, especially as it becomes clearer what had happened to the victim.

There is also a short story 'The Ungoverned' whose main character is Wil W. Brierson, the protagonist in 'Marooned...'.

Overviews without plot spoilers: 'The Peace War' describes a near-future in which bobbles (apparently indestructible stasis fields where time stands still) are used by 'hacker' types to launch an insurrection against the state. 'Marooned...' is set in the far future of the same world, where bobbles are used to support one-way time travel further into the future, where the few remaining humans try to reconnect following the mysterious disappearance of 99.9% of humanity. Both are high-concept SF, but 'Marooned...' also has elements of police procedural where a low-tech detective (Brierson) shanghaied into his future has to solve the slow murder of a high-tech individual (someone from the far future, relative to him).


The shift of the use of bobbles from Peace War to Marooned in Realtime is _wild_. Fantastic stories, wildly creative, delightfully different.


Rainbows End was very prophetic and changed how I view technology.


He had the Bobbler and he also had the Blabber, the latter being a short story involving the Tines ...


The Cookie Monster was one of the best short novellas I ever read, and its influence can be seen everywhere from Greg Egan's Permutation City to episodes of Black Mirror.

Edit: I got it backwards, Egan's book came out first.


Came here to post almost this exact sentiment. I am not sure why, but in college I checked out The Peace War, and I largely credit that book and a few others for getting me back into reading. A couple years ago I decided to order a first edition, and it's sitting on my shelf. The Deepness books were great as well. "Software archaeologists" was a fantastic concept; I felt like that just today, digging through Excel VBA.

Additionally, I think he was the first sci-fi author I found who was a computer science practitioner/professor. This led me to discover other great authors like Greg Egan.


I had the privilege to interview Vernor back in 2011, and continued to have interactions with him on and off in the intervening years. He was, as others have said, just immeasurably kind and thoughtful. I'm sad that I'll not have the opportunity to speak with him again.


I had him as a CS teacher at SDSU for a class. I had no idea he was a sci-fi author when I started the class. Bought his books and was hooked.

He taught me how to implement OS thread context switching in 68000 assembly language. We also had a lab where we had to come up with a simple assembly function that executed slow or fast depending on whether it used the cache efficiently or not.
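
To make that lab concrete: this is not his actual exercise, just a minimal C sketch of the idea (the name sum_stride and the constants are my own invention). Both calls perform the same number of additions over the same array; the stride-1 pass walks memory sequentially and stays in cache, while the large-stride pass takes a cache miss on nearly every access and typically runs several times slower.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 24)          /* 16M ints (64 MB): far larger than any cache */
    #define BIG_STRIDE 4096       /* 16 KB jumps: nearly every access misses */

    /* Sum all n elements, visiting them in an order set by 'stride'. */
    static long sum_stride(const int *a, size_t n, size_t stride) {
        long s = 0;
        for (size_t start = 0; start < stride; start++)
            for (size_t i = start; i < n; i += stride)
                s += a[i];
        return s;
    }

    int main(void) {
        int *a = malloc((size_t)N * sizeof *a);
        if (!a) return 1;
        for (size_t i = 0; i < N; i++) a[i] = 1;

        clock_t t0 = clock();
        long s1 = sum_stride(a, N, 1);           /* sequential: cache-friendly */
        clock_t t1 = clock();
        long s2 = sum_stride(a, N, BIG_STRIDE);  /* strided: cache-hostile */
        clock_t t2 = clock();

        printf("sequential: sum=%ld in %.3fs\n", s1,
               (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("strided:    sum=%ld in %.3fs\n", s2,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(a);
        return 0;
    }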

Great teacher and author, and a very nice guy in general.


I emailed him out of the blue and asked him to write more stories about Pham Nuwen. He replied and was really nice and we corresponded over a couple of emails.


Did you ever ask him what he thought of today's AI advancements?


This guy was one of the greats. A Deepness in the Sky (the prequel) is one of my favourite sci-fi books of all time, and even better than A Fire Upon the Deep imo.


A Deepness in the Sky was perhaps the first "hard sci-fi" novel I ever read (this was before I knew of Greg Egan). The concept of the Spiders and the OnOff star was just awe-inspiring.

While Egan's idea-density is off the charts, I found Deepness in the Sky to be the most complete and entertaining hard-scifi novel. It has a lot of novel science but ensures that the reader is never overwhelmed (Egan will have you overwhelmed within the first paragraph of the first page). Highly entertaining and interesting.

I wonder what Vinge thought of LLMs. If you've read the book, Vinge had literal human LMs in the novel to decode the Spider language. Maybe he just didn't anticipate that computers could do what they do today.

A huge loss indeed.


> Vinge had literal human LMs in the novel to decode the Spider language.

Could you elaborate on this? It's been a while since I read the novel. I remember the use of Focus to create obsessive problem-solvers, but not sure how it relates to generative models or LLMs.

Thinking about it, I'm not sure how useful LLMs can be for translating entirely new languages. As I understand it they rely on statistical correlations harvested from training data which would not include any existing translations by definition.


I do not recall the exact details, but I remember that some of the Focused individuals were kept in a grid or matrix of some sort. The aim of these grids was to translate the spider-talk and achieve some form of conversation with the Spiders on the planet. It is also mentioned that the Focused have their own invented language with which they communicate with other Focused individuals, one faster and more efficient than human languages.

I may be misremembering certain details, but the similarity to neural networks and their use in machine translation was quite apparent.


The zipheads were crippled with a weaponized virus that turned them all into autistic savants. The virus was somewhat magnetic, and using MRI-like technologies they could target specific parts of the brain to be affected to lesser or greater degrees. It's been a while since I've re-read it, but "Focused" was the propaganda label for it from the monstrous tyrannical regime that used it to turn people into zombies, no?


Not zombies, but loving slaves. People able to apply all of their creativity and problem-solving skills to any task given to them, but without much capacity for reflection or any kind of personal ambitions or desires.


Or ability to remember to feed, clean, or toilet themselves.

It's that hyper-focus that I suspect many of us have experienced, but without agency and made permanent. Worse than slavery.


Lack of agency is right. No ability to request medical aid even when suffering crippling pain from a burst appendix.


Yes, they could target specific portions of the brain. Have to re-read the book!


> If you've read the book, Vinge had literal human LMs in the novel to decode the Spider language. Maybe he just didn't anticipate that computers could do what they do today.

I mean, I don't think LLMs have been notably useful in decoding unknown languages, have they?


All currently-unknown real languages that an LLM might decode are languages that are unknown because of a lack of data, due to the civilization being dead. An LLM won't necessarily be able to overcome that.

In the book the characters had access to effectively unbounded input since it was a live civilization generating the data, plus they had reference to at least some video, and... something else that would be very useful for decoding language but would constitute probably a medium-grade spoiler if I shared, so there's another relevant difference.

Still, it should also be said it wasn't literally LLMs; it was humans, merely "affected" in a way that made them basically all idiot savants on the particular topic of language acquisition.


Oh, yeah; I'm just not convinced there's any particular reason to think that LLMs would be useful for decoding languages.

(That said it would be an interesting _experiment_, if a little hard to set up; you'd need a live language which hadn't made it into the LLM's training set at all, so you'd probably need to purpose-train an LLM...)


LLMs are... not bad at finding semantic relationships in arbitrary data. Sure, if you dump an unknown language into an LLM you can only get back semantically correct sentences of unknown meaning, but as you start to decode the language itself it would become much easier to find the relationships there, if not to outright replace the terms with translated ones.


No idea, though given that they're next-token predictors, it can't hurt to try LLMs?


Thomas Nau is such a fantastic villain. Not evil for the sake of evil, but rather reasoned decisions with terrible prices.


> Not evil for the sake of evil, but rather reasoned decisions with terrible prices

The Emergents and their system are pretty clearly just evil, and there's never any indication given that they actually care about those terrible prices, or even reflect on them for long. Vinge is very good at channeling the Orwellian language that regimes like these use, but I didn't find his intent at all ambiguous.

The really compelling and ambiguous character in that book is [redacted spoiler], who really does grapple with the moral implications of his decisions, but ultimately chooses the not-evil path. Personally I think this also highlights Vinge's biggest flaw as an author for me, which is that in all of his books, the most fully realized and believable protagonist is a scheming megalomaniac, with second place going to the abusive misanthrope of Rainbows End, and third to the prickly settlement leader in Marooned in Realtime. All of the more sympathetic characters feel like empty vessels that just react to the plot.


I think Greg Egan in one of his novels has a line that goes something like "Humans cannot be universe conquerors if they don't overcome their bug-like tendencies to invade and destroy". Nah, it is this very tendency that makes them universe conquerors. Nothing beats good old-fashioned greed and discontent.


Reasoned decisions (if you think that empire building is reasonable) without morality and empathy _are_ evil. This is how Putin operates.

Also, raping and torturing are very "evil for the sake of evil", if you ask me.


Yeah, discovering Nau’s chamber of horrors is meant to strip any illusions about his motivations.


Book spoilers.

IIRC wasn't it the chamber/ship of someone he worked with, that he tolerated? Read it like six or seven years ago, so the details are fuzzy. The impression I kept was that he did a lot of evil stuff not because he relished the suffering he created in others, but because he didn't mind it.


It's both. On one hand, he is aware that one of his valued subordinates "needs" to regularly murder people, and doesn't consider it an issue so long as that subordinate remains productive and is kept in check to avoid "wasting resources".

But there's also a record of him personally torturing and raping one of the captives for the sake of it - which he keeps around, presumably to rewatch every now and then.


That's right, I forgot about Nau's enforcer (whose name escapes me atm).

Anyway, I think the point remains - after that point, the reader is meant to understand that the Emergents' self-serving justifications are just that.


Yes, it was not his chamber, but Nau never wanted one because he kept a pet in the open.


I second that. I've re-read it multiple times and enjoyed every minute and every page. The creative concepts making up this book, such as localizers/smart dust or the Focus, captivated me with their plausibility, and the unsolved mystery of the OnOff star bothered me as much as it did Pham Nuwen.

R.I.P. dear friend, you will be missed and remembered.


I once worked with a guy who was a close personal friend of Vernor, and I remember with much joy the enormous collection of science fiction he (the friend) had at his place .. literally every wall was covered in paperback shelves, and to my eyes it was a wonderland.

I casually browsed every shelf, enamoured with the collection of scifi .. until I got to what I can only describe as a Golden Book Shrine Ensconced in Halo of Respect - a carefully maintained, diligently laid out bookshelf containing every single thing Vernor Vinge had written. Everything, the friend said, including stuff that Vernor had shared with him that would never see the light of day until after he passed away. I wonder about that guy now.

It wasn't my first intro to Mr. Vinge, but it was my first intro to the fanaticism and devotion of his fan base - that, in itself, was a unique phenomenon to observe. Almost religious.

Which, given Mr. Vinge's works, is awe-inspiring, ironic and tragic at the same time.

For me, it was a singular experience, realizing that science fiction literature as a genre was far more vital and important to our culture than it was granted in the mainstream. (This was the mid-90's)

Science Fiction authors are capable of inculcating much inspiration and wonder in their fans, yet "scifi" is often used in a derogatory way among the literature cognoscenti. Alas, this myopia occludes a great value to society, and I thank Mr. Vinge - and his fanboix - for bringing me to a place where I understood it was okay to value science fiction as a motivational form. That Golden Book Shrine Ensconced in Halo was itself a gateway to much wonder and awe.


Science Fiction - the literature - is so different from all other media forms of SciFi that there needs to be a formal separation of Science Fiction Literature from SciFi films, live-action and animated series, games, and comic books. These other forms, SciFi, are the cartoon abbreviation of something else: fun and adventure, but not Science Fiction (Literature) and its existential examination of how Science Changes Reality.


Well, you have a point, but the separation is hardly as stark as all that. Both film and television have works that definitely qualify as Science Fiction, and some are even original rather than adaptations from books.

A few examples: 2001: A Space Odyssey, Fantastic Voyage, Moon, much of Twilight Zone and Outer Limits, Prospect, etc.

And for that matter there is plenty of SciFi in print as well, and it isn't all novelizations of movies and TV shows.

There is certainly more room for exposition in a novel (a fact which plenty of authors have abused over the decades, and plenty of directors have abdicated responsibility for by tacking on lengthy voiceover or on-screen text introductions), which often allows for more complete worldbuilding to explore whatever contrafactual premise the story is built around, but it is possible on-screen as well, as long as you don't rely solely on the dialog to convey it. For that matter, books shouldn't rely solely on the dialog for that purpose either.

Of course, on-screen dramatic works aren't the only ones that face the problem of conveying a setting in few words. Novellas, novelettes, and short stories have similar constraints to various degrees.


Absolutely - in the same way that journalism has tabloid, citizen, and authoritative forms, too.

For me the distinction is in the nature of speculation. If you speculate about some facet, and it seems feasible but fantastic, this is the event horizon at which the subject becomes useful as well as entertaining. It was no doubt of great utility to the original developers of satellites to have had Arthur C. Clarke's models in their minds.

However, it's hardly viable to speculate about regular use of teleportation or faster-than-light travel .. unless, of course, we end up getting these things because some kid read a story and decided it could be done, in spite of the rest of the world's feelings about it ..


I knew "Fire upon the deep" would be a good book just few pages in, where in acknowledgements Vinge thanks "the organizers of the Arctic ’88 distributed systems course at the University of Tromsø".


Haven't read his fiction yet, but his singularity essay is very interesting: https://edoras.sdsu.edu/~vinge/misc/singularity.html


"Rainbows End" is his singularity book.


I would argue that all of Vinge's longer works are about Singularitarian disasters. In Tatja we eventually figure out that Tatja herself is arguably the disaster. In Fire it was asleep in the library, and I think in Rainbows there's both Rabbit obviously and the weapon the story focuses on.

You can think of the apparent survival of Rabbit as a hint of doom right at the end, like the fact that Rorschach's diary is in the slush pile at the end of the Watchmen comic.


The Rabbit had a sense of morality. I do not think it intended to enslave or destroy humanity, or any other monstrous end. It kept bargains that it could have cheated, when cheating those bargains cost it nothing. This is at least a hint of a sense of justice. The Rabbit was likely the adversary of some other entity, perhaps something very Blight-like.


I disagree with your characterization of Vinge's works as primarily about disasters but I agree they were all about an accelerating technological pace and its relation with intelligence.

I'm fairly certain the mysterious event in Marooned in Realtime was Ascension.

For A Fire Upon the Deep, it was sealed and there was a powerful countermeasure.

The rabbit of Rainbows End felt like a trickster to me. Child-like playfulness, fey-like chaotic neutral at worst. I do not interpret Rabbit's survival as hints of doom. The weapon was plain old human abuse of power for control.


I think I've said "catastrophes" before rather than "disasters" and I think that's a better word, but I stand by it.

It doesn't matter that Rabbit doesn't intend harm. Neither does Tatja, at least to those who aren't trying to harm her. But look at what she does: at first she almost gets a few dozen people killed - reckless teenager, but hardly extraordinary; the next time we see her she's about to tear apart a kingdom to fraudulently seize power, and as collateral she has (without telling them) ensured that everybody she knew previously will die if she fails. By the end Tatja has started a war in order to seize control of a means to signal off world. Only two other people on her world even realise what "signalling off world" would mean, but she's potentially going to kill huge numbers of people to achieve it anyway. She's a catastrophe even though that wasn't her intent. She does apologise, for whatever it's worth, right at the very end, to people who were close to her and from whom she belatedly realises she is now so distant.

Rabbit is indeed just playing. When the library nearly falls over and kills a lot of university staff and students, that's just a small taste of what happens when playful Rabbit forgets for a moment that this isn't really just a game. Consider just how powerful Rabbit is, remembering that all of this is a distraction. The whole fight, which causes massive disruption to the city and could easily have led to enormous loss of life, isn't what Rabbit was really doing; it was just to distract Bob's team so that they wouldn't focus on the labs for a few hours. And remember that Rabbit's goal here is clearly to secure the weapon for itself, not to deny it to the antagonist.


This is a compelling argument, but I think it's overly pessimistic. Back on the human side, the ending sees Robert adapting to his situation; he loses his left arm (his "sinister"), and it looks like he's lost his wife for good, but he's managed to find some amount of synergy with the new world and technology he's surrounded by. Combined with Rabbit's temporary "defeat" (an experience that, if he's truly a super-intelligence capable of true learning and growth, should lead him to different means and even ends in the future, if nothing else), the implicit conclusion seems to be a future with an imperfect but livable melding of humanity and technology. Not too different from what's come before. Putting all of human history onto a single drive likewise might seem like a diminishing of its significance, but the fact is that it's still there to dive into, should one desire. That's arguably a step up from the past.


Vinge's technological singularity is an explosion of things changing, not a "rapture of nerds".


I'm half-convinced that the Rabbit was an ancient trickster god, and not an AI. Is AI even the correct term? If the Rabbit was a technological non-human intelligence, then surely it was never created (even by accident), and emerged/grew from the computosphere. No governments seemed to be aware of any other government having created it, and two of the nearly-main characters were special operatives tasked with knowing about shit like that and shutting it down before it could result in doomsday scenarios.

I suspect very strongly that had we gotten a followup or two, it would have turned out that the Rabbit had been around for a very long time before even the first transistor.


Marooned in Realtime is about people who missed the singularity.


If you haven't read A Fire Upon The Deep (or even if you already have), you can read the prologue and first few chapters here: https://www.baen.com/Chapters/-0812515285/A_Fire_Upon_the_De...


Considering his novel Rainbows End is about a very sick author in his 70s getting brought back to the world by modern technological breakthroughs in the mid 2020s, I feel like we’ve let him down in some way. Maybe he knew he was already sick, even back then, maybe not. Your meticulous and inspiring level of detail will be missed.


After reading this and all the comments on this thread I think I will pick up some of his books.

Too much science fiction nowadays is dystopian, cynical, and pessimistic. I don't have a problem with individuals writing stuff like that if they really want to - people should have the freedom to write whatever they want - I just personally feel there's too much of it being written.

So seeing that Vernor Vinge wrote stories that portray science and humanity in positive, hopeful, and optimistic ways makes me very interested in reading his work.


Yes, do! I've only read A Deepness in the Sky and A Fire Upon The Deep but they were absolute joys, despite their intimidating page count. Just mind-bendingly inventive and continually interesting. I won't ruin anything for you, but as a reader you make assumptions which then turn out not to be true via progressive revelation as the books go on. Brilliant stuff.


No one else has mentioned what I think are his two greatest insights besides the Singularity:

* A Deepness in the Sky depicts a human interstellar civilization thousands of years in the future, in which superluminal travel is impossible (for the humans), so travelers use hibernation to pass the decades while their ships travel between systems. Merchants, including the ones the book portrays, often revisit systems after a century or two, and so see great changes on each visit.

The merchants repeatedly find that once smart dust (tiny swarms of nanomachines) is developed, governments inevitably use it for ubiquitous surveillance, which inevitably causes societal collapse. <https://blog.regehr.org/archives/255>

* In said future human society pretty much all software has already been written; it's just a matter of finding it. So programmer-archaeologists search archives and run code on emulators in emulators in emulators as far back as needed. <https://garethrees.org/2013/06/12/archaeology/>

(Heck, recently I migrated a VM to its third hypervisor. It has been a VM for 15 years, and began as a physical machine more than two decades ago.)


A lot of love here for A Fire Upon the Deep (predicted fake news via "the net of a thousand lies") and A Deepness in the Sky (great depiction of cognitive enhancement, slower-than-light interstellar trade), but less so for Rainbows End, which is perhaps a less successful story but remains, after almost two decades, the best description of what augmented reality games and ARGs might do to the world.


> A Fire Upon the Deep (predicted fake news via "the net of a thousand lies")

Predicted? A Fire Upon the Deep was published in 1992, by which date Usenet was already mature and suffering such patterns - although not at FaceTwitTok scale.

But still, I love Vinge's take on information entropy across time, space and social networks. A Deepness in the Sky features the profession of programmer-archaeologist and I'm here for that!


> Predicted ? A Fire Upon the Deep published in 1993, at which date Usenet was already mature and suffering such patterns

I still remember the moment when I realised that the galactic network in 'Fire...' was in fact based on Usenet (which I used heavily at the time), especially how it was low bandwidth text (given the interstellar distances) and how it had a fair number of nutters posting nonsense across the galaxy ('the key insight is hexapodia'). Great author, who'll be sadly missed.


Skrodes have six wheels, so…


I recently re-read Rainbows End, and I think "do to the world" is an appropriate phrasing. It's a strikingly unpleasant vision of a world in which every space is 24/7 running dozens of microtransaction AR games... I found the part where Juan walks through the "amusement park" particularly effective, where little robots would prance around trying to entice him into interacting with them (which would incur a fee).


The ARG in Rainbows End seems more realistic, and somehow more enjoyable, than anything I've ever heard pitched by a real company.


I think it's also one of the best descriptions of living at the onset of massive, disruptive technological changes, and how disorienting (and occasionally terrifying) this would feel. The fundamental problem with that book, for me, is that the main protagonist is (deliberately) an utterly loathsome individual, who somehow ends up as a good guy but doesn't seem to do very much learning or self-reflection.


I just finished reading Children of the Sky and re-reading A Deepness in the Sky. I've been finding that with Vinge's work, along with Iain Banks's, a lot of it is better the second time around. There's just so much to take in.


Unfortunately Banks passed away more than 10 years ago now. It doesn't seem like that long ago that his last book came out.


He wrote some of my favorite sci-fi books. I was aware he wasn't in good health for a while already, it's still sad to hear about his passing of course. Thank you for the worlds you showed me.


My favorite author of all time. 80 years is a good run, but I wish he’d seen another 20.

I would've loved to read his reaction to the 2020s. Rainbows End is by far the best prediction of what this decade has been like, from nearly two decades ahead.

I wish we’d gotten to read a few more books from Vinge.


His larger works are getting a lot of praise (justifiably - I read "A Fire Upon the Deep" on a friend's recommendation, then everything else Vinge wrote), but some of his short stories strongly resonated with me, too.

The Cookie Monster is, IMO, a thought-provoking marvel.


Thank you for recommending this! I hadn't read it, and it was delightful. It's online: https://www.ida.liu.se/~tompe44/lsff-book/Vernor%20Vinge%20-...


Oh, that was a great read! I'd never come across it before. Thank you for posting it.


Oh man, I sincerely hope he was signed up for cryonics. If there was someone who deserved to see what the future holds, it was him.


From what I’ve read, cryonics seems like a massive scam pulled on rich people. The tissue damage in these frozen corpses is extensive and irreparable.


Oregon Brain Preservation offers cryopreservation for $5,000 (or less if you can't afford that), Cryonics Germany offers free cryopreservation, and even the most expensive providers (Tomorrow Biostasis, Alcor, and Yinfeng) are affordable through life insurance. Most of us aren't wealthy and some of us are working class. Since 2000, vitrification has replaced freezing, dramatically reducing damage, and even frozen people might be recoverable centuries from now. It's offered on a nonprofit, experimental basis by organizations with public financial statements.


Well said! If anything, I'd say the money spent on traditional funerals is more of a scam but nobody seems to talk about that.

Expensive coffins, elaborate headstones, burial plot sales, etc.

As soon as somebody tries to spend their money in a way that might actually benefit them, people get defensive or try to justify death as noble or natural.


> irreparable

That’s the gamble. I think you’re right though, it’s far lower odds than the snake oil salesmen present.


The alternative of cremation is still lower odds.


Even if some future technology could repair the damage, it’s a big gamble that someone in the future will want to repair the damage.


So you’re saying there is some chance then. Can we let go of the pseudoscience and quackery quote from the early 90s then?


Then you obviously haven't read much about cryonics, which involves vitrification rather than freezing to avoid such tissue damage.


In real medical cryogenics, e.g., embryo preservation, vitrification is spoken of as a kind of freezing, which, of course, it is. Only cryonics advocates claim that vitrification isn't a kind of freezing.


If the topic is tissue damage from sharp ice crystals, it's pretty handy to draw the distinction between cooling methods that cause that and ones that don't.


Yes, that's the relevant distinction in fact. Cryonics are the former, not the latter. Multicellular cryonic suspension is an unsolved problem after roughly the blastocyst stage.


As of last year we're up to doing rat kidneys. They're "heavily" damaged but they recover within a few weeks. To be sure, there's a long way from that to near-perfectly preserving a human brain, let alone a whole body.

https://www.statnews.com/2023/06/21/cryogenic-organ-preserva...


Yes, that is a living rat-sized kidney. Not a dead human-sized brain. And on a pass-fail grade, I'm giving that experiment a fail. Promising, yes.

Cryopreservation of corpses is a scam designed to fleece rich people with an extraordinary fear of death. Some justify it to themselves as supporting research which might lead to effective corpsicles, but to support such research they could simply donate to it. Not waste their money on an elaborate and expensive embalming with no hope of salvation.


By that logic, computers are a dead end because Babbage's Difference Engine No. 1 never really worked properly... Or that space travel is impossible because a lot of early rockets blew up on the pad.

I don't understand this kind of pessimism at all.


It might work someday, maybe. But it won't work now. The corpsicles which currently exist are just as dead as if they were cremated. I understand, sort of, the psychology of people who lie to themselves about this, but that's all that's happening.


You don't know that because you don't know the physical limits of reanimation technology. In 2014, a human brain was vitrified with no ice crystallization or fracturing for the first time. Certainly, the first viable preservation will occur (if it has not already occurred) long before the first reanimation, and eventually discovering that we began to preserve people too soon would be much better than discovering that we began not soon enough. Even the primitively frozen might be retrievable centuries from now.


You don't know what future technological capability will be. Current Alcor and CI patients are preserved well enough that their bodies and brains could in theory be repaired by technology at the physical limits of possibility. The information is there.

You are saying that existing technology is unable to fix the issues of vitrification. That is correct, but irrelevant.


Scam implies intent and someone benefiting. These cryonics organizations are nonprofits run by members.


Well, I've read an article about some people getting flushed down the drain because the company that was supposed to keep them frozen kinda went out of business.


What evidence do you base those beliefs on?


Checking Alcor¹ and the Cryonics Institute² suggests no :-/

¹: https://www.alcor.org/news/ ²: https://cryonics.org/case-reports/


I doubt either site would be updated with info about an ongoing cryopreservation.


Vinge introduced me to space opera with the Zones of Thought books. Such amazing books, I've read them multiple times.


RIP to author of one of my favorite books.

If Vernor Vinge doesn't deserve the black banner atop HN, then nobody does.


Oh man, that's sad to hear. I really loved his books, especially the ones that looked into the future from a modern-day engineer's point of view. "Rainbows End" comes to mind quite often as I read the tech news; it paints a picture of a future that seems to get closer day by day - a sci-fi future that one can realistically believe we'll live in one day.


Aw, man. This is a bummer, considering how deep into "replicating Rainbows End" we are (despite everyone and their mother's insistence that we try for a "Ready Player One" future). I find it funny that it seems to be one of his least-liked novels, because the concepts and characters it plays with have always been more approachable and relatable - and less terrifying - than in much of his other work (insofar as I can tell, being wary of reading them).

I still maintain that Miyazaki needs to adapt RE before he heads out himself: https://imgur.com/a/8PeXHlb


On Vernor Vinge's connection to free software:

https://lwn.net/Articles/310463/


He was no slouch when it came to programming.

He taught classes that went through the actual 68000 assembly to perform the context switch between threads in an interrupt service routine (copy the saved registers of the running thread from the stack to a separate area, and overwrite them on the stack with the registers of the thread you want to switch to).
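
To make the mechanism concrete, here is a hedged C sketch of what that description amounts to - not his course code, and the names and frame layout (reg_frame_t, tcb_t, context_switch) are invented for illustration. The interrupt entry has already pushed the running thread's registers onto the stack; the switch simply swaps which saved frame the ISR's return path will pop.

    #include <string.h>

    /* Hypothetical snapshot of the registers as the interrupt entry stub
       pushed them. On a 68000 this would be D0-D7/A0-A6 plus the
       exception frame (status register and program counter). */
    typedef struct {
        unsigned long d[8], a[7];   /* data and address registers */
        unsigned short sr;          /* status register */
        unsigned long pc;           /* where the thread resumes */
    } reg_frame_t;

    typedef struct tcb {            /* per-thread control block */
        reg_frame_t saved;          /* registers while not running */
        struct tcb *next;           /* round-robin run queue */
    } tcb_t;

    static tcb_t *current;          /* thread interrupted by the timer */

    /* Called from the timer ISR with a pointer to the register frame
       that the interrupt entry pushed onto the stack. */
    void context_switch(reg_frame_t *stack_frame) {
        tcb_t *next = current->next;
        if (next == current)
            return;                 /* only one runnable thread */

        /* 1. copy the running thread's saved registers off the stack
              into its control block ... */
        memcpy(&current->saved, stack_frame, sizeof *stack_frame);
        /* 2. ... and overwrite the stack with the registers of the
              thread we are switching to */
        memcpy(stack_frame, &next->saved, sizeof *stack_frame);

        current = next;
        /* The ISR's return (RTE on the 68000) now pops the replaced
           frame, so execution resumes in the other thread. */
    }

The nice property of this scheme is that the ISR's ordinary return instruction performs the actual switch; neither thread ever notices it stopped running.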

