This post is an exercise in “identifying with the algorithm.” I’m a big fan of the probabilistic method and randomized algorithms, so my biases will show.
How do human beings produce knowledge? When we describe rational thought processes, we tend to think of them as essentially deterministic, deliberate, and algorithmic. After some self-examination, however, I’ve come to think that my process is closer to babbling many random strings and later filtering by a heuristic. I think verbally, and my process for generating knowledge is virtually indistinguishable from my process for generating speech, and also quite similar to my process for generating writing.
Here’s a simplistic model of how this works. I try to build a coherent sentence. At each step, to pick the next word, I randomly generate candidate words in roughly the right category (correct part of speech, relevant meaning) and sound them out one by one to see which continues the sentence most coherently. So, instead of deliberately and carefully generating sentences in one go, the algorithm is something like:
- Babble. Use a weak and local filter to randomly generate a lot of possibilities. Is the word the right part of speech? Does it lie in the same region of thingspace? Does it fit the context?
- Prune. Use a strong and global filter to test for the best, or at least a satisfactory, choice. With this word in the blank, do I actually believe this sentence? Does the word have the right connotations? Does the whole thought read smoothly?
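The two-phase loop above can be sketched in a few lines of code. This is a toy illustration only: the word pool, the weak filter, and the global scoring heuristic are all hypothetical stand-ins for whatever the brain actually uses.

```python
import random

# Hypothetical word pool standing in for everything you could babble.
WORD_POOL = ["bright", "blue", "loud", "river", "runs", "sings", "quietly"]

def babble(pool, n=20):
    """Weak, local filter: cheaply generate many candidate words."""
    return [random.choice(pool) for _ in range(n)]

def prune(candidates, score):
    """Strong, global filter: keep only the best-scoring candidate."""
    return max(candidates, key=score)

def next_word(sentence_so_far):
    # Toy global score: prefer words not already used, plus a little
    # jitter -- a stand-in for "does the whole thought read smoothly?"
    def score(word):
        return -sentence_so_far.count(word) + random.random() * 0.1
    return prune(babble(WORD_POOL), score)

sentence = ["the"]
for _ in range(4):
    sentence.append(next_word(sentence))
print(" ".join(sentence))
```

The key structural point survives even in the toy: `babble` is cheap and permissive, `prune` is expensive and picky, and each can be improved independently.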
This is a babble about embracing randomness.
Research on language development suggests that baby babble is a direct forerunner to language. You might imagine that infants learn by imitation, and that baby babble is just an imperfect imitation of words the baby hears, and progress occurs as they physiologically adapt to better produce those sounds. You would be wrong.
Instead, infants are initially capable of producing all the phonemes that exist in all human languages, and they slowly prune out which ones they need via reinforcement learning. Based on the sounds that their parents produce and respond to, babies slowly filter out unnecessary phonemes. Their babbles begin to drift as they prune out more and more phonemes, and they start to combine syllables into proto-words. Babble is the process of generating random sounds, and looking for clues about which ones are useful. Something something reinforcement learning partially observable Markov decision process I’m in over my head.
So, we’ve learned that babies use the Babble and Prune algorithm to learn language. But this is quite a general algorithm, and evolution is a conservative force. It stands to reason that human beings might learn other things by a similar algorithm. I don’t think it’s a particularly controversial suggestion that human thought proceeds roughly by cheaply constructing a lot of low-resolution hypotheses and then sieving through them by letting each play out to its logical conclusion.
The point I want to emphasize is that the algorithm has two distinct phases, both of which can be independently optimized. The stricter and stronger your Prune filter, the higher-quality content you stand to produce. But a common bug follows from this: if the quality of your Babble is much lower than that of your Prune, you may end up with nothing to say. Everything you can imagine saying or writing sounds cringey or content-free. Ten minutes after the conversation moves on from that topic, your Babble generator finally returns that witty comeback you were looking for. You’ll probably spend your entire evening waiting for an opportunity to force it back in.
Your pseudorandom Babble generator can also be optimized, and in two different ways. On the one hand, you can improve the weak filter you’re using, to increase the probability of generating higher-quality thoughts. The other way is one of the things named “creativity”: you can try to eliminate systematic biases in the Babble generator, with the effect of hitting a more uniform subset of relevant concept-space. Exercises that might help include expanding your vocabulary, reading outside your comfort zone, and engaging in the subtle art of nonstandard sentence construction.
Poetry is Babble Study
Poetry is at its heart an isolation exercise for your Babble generator. When creating poetry, you replace your complex, inarticulate, and highly optimized Prune filter with a simple, explicit, and weird one that you’re not attached to. Instead of picking words that maximize meaning, relevance, or social signals, you pick words with the right number of syllables that rhyme correctly and follow the right meter.
Now, with the Prune filter simplified and fixed, all the attention is placed on the Babble. What does it feel like to write a poem (not one of those free-form modern ones)? Probably most of your effort is spent Babbling almost-words that fit the meter and rhyme scheme. If you’re anything like me, it feels almost exactly like playing a game of Scrabble, fitting letters and syllables onto a board by trial and error. Scrabble is just like poetry: it’s all about being good at Babble. And no, I graciously decline to write poetry in public, even though Scrabble does conveniently rhyme with Babble.
Puns and word games are Babble. You’ll notice that when you Babble, each new word isn’t at all independent from its predecessors. Instead, Babble is more like initiating a random walk in your dictionary, one letter or syllable or inferential step at a time. That’s why word ladders are so appealing – because they stem from a natural cognitive algorithm. I think Scott Alexander’s writing quality is great partly because of his love of puns, a sure sign he has a great Babble generator.
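The random-walk picture is easy to make concrete with a word ladder: each step changes one letter, and a weak filter (membership in a word list) keeps the walk on real words. The tiny word list below is an assumption for illustration, not real data.

```python
import random

# Hypothetical miniature dictionary for the walk to wander through.
WORDS = {"cat", "cot", "cog", "dog", "dot", "cut", "bat", "bag"}

def neighbors(word):
    """Words one letter away that pass the weak filter (being real words)."""
    out = set()
    for i in range(len(word)):
        for c in "abcdefghijklmnopqrstuvwxyz":
            candidate = word[:i] + c + word[i + 1:]
            if candidate != word and candidate in WORDS:
                out.add(candidate)
    return out

def babble_walk(start, steps=5):
    """Random walk through the dictionary, one letter at a time."""
    path = [start]
    for _ in range(steps):
        options = neighbors(path[-1])
        if not options:
            break
        path.append(random.choice(sorted(options)))
    return path

print(babble_walk("cat"))
```

Each successive word depends only on the previous one, which is exactly the non-independence between neighboring words that the paragraph above describes.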
If poetry and puns are phonetic Babble, then “Deep Wisdom” is semantic Babble. Instead of randomly arranging words by sound, we’re arranging a rather small set of words to sound wise. More often than not, “deep wisdom” boils down to word games anyway, e.g. wise old sayings:
“A blind person who sees is better than a seeing person who is blind.”
“A proverb is a short sentence based on long experience.”
“Economy is the wealth of the poor and the wisdom of the rich.”
Reading is Outsourcing Babble
Reading and conversation outsource Babble to others. Instead of using your own Babble generator, you flood your brain with other people’s words, and then apply your Prune filter. Because others have already Pruned once, the input is particularly high-quality Babble, and you reap particularly beautiful fruit. How many times have you read a thousand-page book, only to fixate on a handful of striking lines or passages?
Prune goes into overdrive when you outsource Babble. A bug I mentioned earlier is having way too strict of a Prune filter, compared to the quality of your Babble. This occurs particularly to people who read and listen much more than they write or speak. When they finally trudge into the attic and turn on that dusty old Babble generator, it doesn’t produce thoughts nearly as coherent, witty, or wise as their hyper-developed Prune filter is used to processing.
Impose Babble tariffs. Your conversation will never be as dry and smart as something from a sitcom. If you can’t think of anything to say, relax your Prune filter at least temporarily, so that your Babble generator can catch up. Everyone starts somewhere – Babbling platitudes is better than being silent altogether.
Conversely, some people have no filter, and these are exactly the kind of people who don’t read or listen enough. If all your Babble goes directly to your mouth, you need to install a better Prune filter. Impose export tariffs.
The Postmodernism Generator is so fun to read because computers are now capable of producing great Babble. Reading poetry, reading randomly generated postmodernism, talking to chatbots: these activities all amount to frolicking in the uncanny valley between Babble and the Pruned.
Tower of Babble
A wise man once said, “Do not build Towers out of Babble. You wouldn’t build one out of Pizza, would you?”
NP is the God of Babble. His law is: humans will always be much better at verifying wisdom than producing it. Therefore, go forth and Babble! After all, how did Shakespeare write his famous plays, except by randomly pressing keys on a keyboard?
NP has a little brother called P. The law of P is: never try things you don’t understand completely. Randomly thrashing around will get you nowhere.
P believes himself to be a God, an equal to his brother. He is not.
Anecdotally, a lot of really creative people I know are very strong babblers. (A particular young USAMO winner comes to mind….)