Daniel Schmachtenberger studies existential risk. Here’s how we might thread an Alt-Middle way to a civilization whose tech, healthcare, social structures, and built environment promote well being rather than divisional apocalypse.
– [Zubin] Hey guys, Dr. Z. Welcome to “The ZDoggMD Show.” Today, I have a guest I’ve been wanting to get on forever and ever and ever. You guys know I’m really interested in the social dilemma, in other words, the effect that social media and these crazy exponential technologies that hack our dopamine centers have had on us, the mental health of children, and so on. A lot of my knowledge on that comes from our guest today, and his partner, Tristan Harris, who runs the Center for Humane Technology. Daniel Schmachtenberger is the founder and director of The Consilience Project, and he is an expert in existential risk, the catastrophes that can end civilizations, and how we might thread that narrow needle to avoid them. Daniel, man, welcome to the show.
– [Daniel] It’s good to be here with you, man. Happy that we’re finally here.
– [Zubin] Dude, I just saw you on Rogan with Tristan, and I was like, “Oh, man, they’re hitting all the buttons on civilization-level threats,” and then I thought, “Well, what are we gonna talk about when you come on my show?” ’Cause you just were on Rogan. You’ve been on Lex Fridman. You’ve been all over the place. And then we started talking before the show, and immediately, I was like, “Oh my God, this is a conversation no one’s having.” So let’s start it this way. We have these God-level technologies now, and I’m talking about social media in particular, but there are other technologies too, that really prey on our brain’s innate addictive capacity, and really hack it in ways that affect our health not just individually, but the health of our entire society. Maybe that’s a good starting point for jumping into what you’re passionate about, because this is stuff you’ve dedicated your life to studying: how we can avoid catastrophe.
– [Daniel] Okay, so you say godlike tech, and if it’s not obvious for people, so that it doesn’t sound like hyperbole, maybe we just take a moment, and say what we mean by that. When we have the ability to do synthetic biology, like actually make life, and the ability for genetic engineering, CRISPR, other methods, to be able to change the fundamental base code of life, artificial intelligence that can do calculations that no type of biological intelligence can do, when we can drive whole species extinct, destroy whole ecosystems, build new environments, where the Anthropocene, the human-built environment, is the major force shaping whole parts of the surface of the world, that’s not the power of an apex predator, right? No, like orcas aren’t doing all that.
– [Zubin] Are you sure? I don’t know if you’ve read “The Hitchhiker’s Guide to the Galaxy,” but I believe dolphins did a lot of damage.
– [Daniel] So trying to model humans as, you know, one of the primate hominids, and apply apex predator theory, it just clearly doesn’t explain the kind of thing that we are. We’ve had a curve of being able to develop tech faster than the environment can become resilient to the way we use that tech, right? There’s a co-evolutionary process where predators increase their predatory capacity through mutation at the same rate that the prey is increasing its capacity to avoid the predators. The slower ones get eaten, the faster ones breed, that kind of thing. We were able to increase our predatory capacity way faster than any animal was able to increase its resilience to it. So we were able to overhunt an environment, and then rather than have our population checked by that environment, we were able to move, and become the apex predator in the next environment. Polar bears are not apex predators in the savanna. Orcas aren’t on land. We went and became apex predators everywhere. We were able to overhunt an environment, and then enslave animals through factory farming, and other things like that. So this is really different, it’s a different thing, and-
– [Zubin] We’re a whole new genre of being, right, to be able to do that, and is that, and not to derail you, but is that because we have the capacity for language, communication across large groups in flexible ways that, say, bees, or ants don’t have, or orcas don’t have?
– [Daniel] Yeah, there’s a type of abstraction that lends itself towards the nature of our language, and very specifically towards the nature of our tech. Obviously, we’ll see that beavers use, they modify their environment in a particular way. Birds with their nests do. We’ll obviously see whether it’s birds, or primates’ use of certain kinds of objects in their environment that we can call a tool, but they’re not evolving the way they do that as time goes on. They co-evolved with some environmental capacity that they found. So a chimp will use a rock to cut something. They’ll experientially try one rock versus another, and whichever one they can tell is cutting better they’ll use, but to understand the abstract principle of sharpness, what does this one have more than this one to design one that is sharper, is a thing they don’t do, right? But it is a thing that Homo habilis started doing, which was how do we actually start to create stone tools? ‘Cause we understand the abstract principle of sharpness, and can make a sharper thing, which is understanding what all these have in common, but it’s actually not any of them. And then, so that process of what humans were doing that was different started with the beginning of stone tools. Obviously, it took a major jump with the Agricultural Revolution, took a major jump with the Industrial Revolution, nuclear, and then obviously with digital. So we find ourselves in the verticalizing part of a long exponential curve of power, and you can say now the power of gods, and yet, not necessarily a corresponding degree of increase in wisdom for how to use that power, and all of the existential threats are because of that, because obviously stone tools, or even agrarian tools, can’t destroy the whole world. So catastrophic risks are always, all the human-induced catastrophic risks are mediated by the level of technological power we have that doesn’t have adequate wisdom, prudence binding and guiding its use.
– [Zubin] So this is a great base to start this discussion from, because this idea of all of human history starting with Homo habilis, you said, so this was one of our progenitors, able to use tools, abstract, and actually understand the concept of sharpness, and that allows a kind of recursive improvement in tools that a chimpanzee, or a primate, can’t do, because like you said, they’re co-evolving that use of tool with their environment. So maybe they notice experientially, like you said, that this piece of stone cuts a little better, so they start using it. But they can’t de novo go, “Oh, it cuts better because of sharpness. I bet I could design a sharper tool.” So we have this really interesting ability to do that, and over tens of thousands of years of Homo sapiens evolution, maybe longer, right, it’s now reaching a phase of exponential growth, because the technology itself is allowing a recursive improvement in the technology, right?
– [Daniel] And so, you know, you were starting to talk about addiction, and kind of the hacking of dopamine circuits and reward circuits. And one way to think about that is that Homo sapiens are at least a few hundred thousand years old, and obviously, the hominid precursors to that are at least a few million years, and all of that genetic evolution was fit for hunter-gatherer environments, right? Like we were evolving reward circuits that were relevant for hunter-gatherer environments.
– [Zubin] Like what, like sweet taste, or-
– [Daniel] Yeah, so if we take the reward circuits on food specifically, fat, salt, and sugar create a stronger dopaminergic response than green cellulosic stuff, and whatever else, right?
– [Zubin] And dopamine’s our reward chemical to make it very simple.
– [Daniel] Feels good, do it again. Dopamine-opioid axis, right?
– [Zubin] Yeah.
– [Daniel] So in an evolutionary environment, you didn’t have a lot of salt, fat, or sugar, right? The sugar that you had was only pretty much gonna come from things like fruits that were gonna be seasonal, and before we hybridized the fruits, the fruits had way more seeds, and way less fruit than they have now, right, in terms of most-
– [Zubin] Fiber to sugar content, yeah.
– [Daniel] And so, getting all you could of the foods that had more caloric density meant that you had a better chance of making it through a famine, and possibly being able to feed more people. The same was true with fats, and the same was true when you’d find a salt source. And so, there was an evolutionary selective advantage to getting all of those you could. Those who had more reward circuit on those actually did better. Once we figured out how to make all the salt, fat, and sugar you want, combine them in the ways that maximize addiction, and remove all the micronutrients, all the vitamins and minerals, and basically all of fast food is just combining salt, fat, and sugar in novel ways with maximum texture palatability, to drive maximum addictive process separated from all the micronutrients, you took something where the evolutionary reward was attached to a real thing. Now, you have, of course, overweight and obesity being one of the main causes of health issues, and yet the reward circuit hasn’t changed. And so, you were able to extract the active ingredient from the thing it was bound to that was actually useful. And the same is true with porn to sex. The same is true with social media to tribal friendships. The same is true with productivity to meaning, right? There’s a whole bunch of places like this. Historically, the sexual impulse was attached to a real mating opportunity, bound to the fact that you were going to have children, and there was all kinds of intimacy required to have that happen. Just extract the hypernormal parts, and maximize them, and you get porn.
– [Zubin] Ah.
– [Daniel] Same with likes on Facebook, right? You have no real friend. You don’t have authentic intimacy. You don’t have authentic trust. But you have lots of positive feedback of certain types, which, of course, then is gonna give you the equivalent of overweight psychologically, which is like narcissism and emptiness.
– [Zubin] Ooh.
– [Daniel] So you can see that society has been oriented to hijack reward circuits, be able to extract the active ingredient, double down on that separate from the actual evolutionary beneficial thing that it was originally for. And that’s one of the things is we’re not genetically fit for this environment. We’re not genetically fit for the Anthropocene. And yet, we’re an adaptive enough creature, we can adapt, but adapt, the type of creature we become adapting to that environment might not be the type of creature we would choose to be otherwise.
– [Zubin] Okay, this is the central foundation of everything we’re gonna talk about, because the Anthropocene, this age of man, it’s so interesting that man has those capacities and evolves to a point where it can generate technologies in a society that actually feed back to its primitive brain in a way that it almost is compelled to do, like, “Oh, wait, so sugar tastes good, fat tastes good, salt tastes good.” Fast food inevitably emerges, because we know, and especially when you tie it, which we can talk about, to economic incentives, and capitalism, et cetera, that allow a feedback loop, that you can actually get the productivity boost of making money, being secure, et cetera, from selling stuff that plugs right into our limbic mind. And now, we have a world where, you said it on Rogan, we have these paleolithic minds in a situation with this godlike technology that really hacks the paleolithic mind. So whether it’s fast food, whether it’s social media, whether it’s porn, and porn’s something I talked about a long time ago, it is a decoupling of that reward dopamine circuit from all the meaning, the relationship, the intrinsic connections that sex meant historically. Even when it was less of that, like, you know, there were polyamorous societies, and all of this. That’s all fine, but the truth is it had more connection and meaning than it does when you’re surfing porn and masturbating, right?
– [Daniel] I mean, let’s just, we won’t take too long with this, but-
– [Zubin] You can take as long as you want with porn, because honestly, I could talk about it all day.
– [Daniel] Even if you look at the multi-male, multi-female earlier mating systems that you’d call polyamorous mating systems, like the ones that “Sex at Dawn” talks about, the Mosuo, or the Canela people, or whatever, oftentimes, the modern proponents of those systems fail to recognize that those cultures had a completely different social system that made that adaptive, which also included a different economic system, right? So you have multi-male, multi-female mating, but the whole tribe raised all the kids.
– [Zubin] Uh-huh.
– [Daniel] The woman wasn’t left as a single mom, because she lived with her family, not with the partner that inseminated her. All of the fathers, ’cause there was low paternity certainty, all of the fathers were contributing to all of the kids. You actually didn’t have a primarily private-property-based system, but a commons-based system.
– [Zubin] Uh-huh.
– [Daniel] So you can’t extract that one part from the whole rest of it and have it work, right? So obviously, that system could only be adaptive within that context. This was actually a very interesting conversation Bret Weinstein and I had many, many years ago, ’cause a lot of his evolutionary theory background was primate mating systems, and-
– [Zubin] Oh, wow.
– [Daniel] And he was-
– [Zubin] Monkey sex.
– [Daniel] Arguing for why institutional monogamy did some really important things that a lot of people didn’t understand, and for how the sexual revolution really damaged it, and I don’t wanna put the wrong words in Bret’s mouth. I’m gonna share how I understand it, inspired by that conversation with him. He might hold this with more nuance, but it actually upgraded my thinking on it. He said, when you think about what we call civilization, to be civilized is kind of a domestication program for wild humans to be able to operate together at larger than tribal scale-
– [Zubin] Yeah, makes sense.
– [Daniel] And for hundreds of thousands of years, we never got tribes bigger than about 150 people. They stayed at very small scales, where everybody knew everybody, so that the sacrifices you made for others were non-anonymous people. They were people that you knew really well, and that you wouldn’t do that at much larger sizes. And so, then the much larger thing, the thing we call civilization, can also be thought of as a domestication program.
– [Zubin] Huh.
– [Daniel] And the main things you have to domesticate out of people that make them not work well with others have to do with sex and violence. And so, this is also where most of the psychological shadow comes from, and it’s why the intersection of sex and violence is the deepest part of most people’s psychological shadow.
– [Zubin] Oh.
– [Daniel] They’re kind of put into the same areas. But if you think about institutional monogamy occurring within that context, a few of the things it was designed to do: if you couldn’t have sex until getting married, and then you weren’t allowed to divorce, and that was actually held, and of course, no system will be perfectly held, but just take the idea, then in order to get laid, a guy had to get a girl’s parents, and preacher, and community to decide that he was an acceptable husband and father.
– [Zubin] Hmm.
– [Daniel] And the binding of his ability to get laid to his ability to be a good father long-term meant that there was an incentive for him to actually be a good guy long-term, which meant him being a good guy for civilization and him being a good guy as a father were bound to his need to get laid, right? And there was a vetting beyond her, who might already have oxytocin keeping her from assessing him well, ’cause of the crush; the father, and the mother, and the community would also be helping to assess, to grant the right to do the thing. And then in the wedding ceremony, does anyone object? If he had been an asshole to other people, they got to bring that up, so he has to be an asshole to nobody, otherwise, he’s never gonna get laid. Then this would also be even where the slut-shaming came from, which is such a terrible thing in our modern context, but I was coming to understand where a possible evolutionary relevance of it was-
– [Zubin] Yeah, yeah.
– [Daniel] Was that if any women would start to have sex with guys outside of the marriage context, which would mean that assholes could get laid, it creates an evolutionary niche for assholes to actually be able to make it, and then those guys figure out how to get more women to do that thing, and so, the idea was almost like herd immunity. The idea was a collectivist idea. You actually have to close the niche for assholes comprehensively. If you want a civilization to go well, guys are going to do what they need to do to get laid. So if you bind the opportunity to get laid to being a good citizen with multiple people vetting it, that’s a good system. It’s actually interesting, right? Like most people, myself included, who kind of grew up in a more post-sexual revolution, liberated idea, thought of that as just oppressive nonsense.
– [Zubin] Right.
– [Daniel] And then I’m like, “Oh, that’s actually interesting.” So then the idea was that with birth control, you kind of have a sexual revolution. It seems like it’s liberating for women in particular, because you decouple sex and reproduction for the first time.
– [Zubin] Right.
– [Daniel] Where historically you could never really decouple sex and reproduction well, which is also why that was gonna inexorably affect her biology more than his, because he could possibly get laid, have a genetic benefit from doing so, and have no consequence, and she could not have no consequence. It could be a life-or-death thing for her, right? And so, of course she’ll have higher criteria and be more biologically oriented toward bonding, which makes perfect sense that it should be that way. But then she’s able to kind of let that go and be a liberated modern person because of the birth control pill, but her evolutionary biology hasn’t changed. Again, just like I can’t eat all of the chocolate cake that I want and not get fat just ’cause I want to, right? The biology is the way that it is-
– [Zubin] The biology’s still there-
– [Daniel] I don’t get to just separate the reward circuit.
– [Zubin] Right.
– [Daniel] And so, then the idea is that after that there was more of an evolutionary niche where assholes could get well-laid, and that that actually has a culture-damaging property. So if you wanna go beyond the convention of marriage, and not drop to pre-conventional, developmentally meaning more selfish than the convention, but go to truly post-conventional, the post-conventional question has to be: how do individuals have more freedom than the institution of marriage to make that choice, while still paying attention and not creating niches for bad behavior to be able to propagate? Just an interesting topic.
– [Zubin] You know, it’s fascinating, man, I didn’t think we’d be talking about that, and what’s really interesting about it is it binds this idea that you have this particular hardware, this biology that’s evolved over however many hundreds of thousands or millions of years, but then you have the software of the world that feeds back onto it, and emerges from it. So the particular constraints of the hardware may create a social dynamic where, again, guys have to be good, they are vetted by the family, wait ’til marriage to get laid, reduce asshole genesis, and that helps civilization, and helps the marriage. And so, that dynamic feeding back with the biology creates the emergent civilization. Now, we’re in a situation where, and like you said, a good example is birth control, and again, neither one of us is saying, “Oh, dude, get rid of birth control, go back to a dowry,” right? No, we’re saying the environment and the hardware actually interact, and we better be aware of how that happens, because now it’s happening in a way that may not lead to a civilization that we want collectively.
– [Daniel] Yeah, I mean, you’ve addressed the thesis-antithesis issue, and kind of the distinct moral intuitions that cluster more on the right and more on the left. And if you were to try to make a dialectic like that, like say traditional and progressive, or conservative/traditional and progressive, innovative, one way of framing, there’s lots of ways of framing this. There’s problems with all the things I’m gonna say, ’cause I’m, but one way of framing the-
– [Zubin] Models are limited, yeah.
– [Daniel] The conservative, or traditional, intuition is the idea that if there’s a social system that made it through the trials of evolutionary history, and there’s lots of them that failed, it probably has a lot of embedded wisdom that isn’t obvious. It probably made it through for a reason. So go back to the old thing, keep the old thing, whether it’s the Founding Fathers, or Christianity, or whatever the thing is that you’re trying to conserve. There might be reasons why it worked that we don’t even understand well, but it was tested, and, you know, tried and true, and so-
– [Zubin] Proof is in the pudding-
– [Daniel] Of that.
– [Zubin] Yes, yes.
– [Daniel] The progressive intuition is that we’re facing novel situations that we never faced before, that the things that worked in the past couldn’t possibly work for them, and that innovation is needed. These are obviously both true, and need to be in dialectic. So the idea that either of those alone would be adequate is nonsense, ’cause if the new thing you’re doing doesn’t factor in that most of the environment is still the same and that the old things might work for reasons you don’t know, and you throw the traditional thing out too fast, then realize it was doing things you didn’t realize, you just fucked up-
– [Zubin] Yeah.
– [Daniel] Right, so the progressive often doesn’t pay enough attention to the traditional impulse, and vice versa.
– [Zubin] 1,000%, and the way that’s intuitively felt by a lot of people in my audience, we use this term alt-middle. It’s kind of a meme-y phrase, and it’s really not quite right. It’s a parody of alt-right, and it’s saying, “Hey, no, we’re actually looking for a synthesis position.” We’re saying, “Oh, there’s actually a higher-order thinking and complexity that’s needed to integrate the two sides,” because each side on its own is missing a big piece of the complexity. So how do we integrate that? And to have that standpoint actually, or at least aspirationally, means inhabiting each side, right? And again, I promise we’ll get back to the Anthropocene and the technology and all that. But to be able to inhabit each side is necessary to emerge the synthesis. Otherwise, you know, we’re stuck in this left-right dichotomy, a thesis and antithesis, whatever subject you’re talking about, where you have two sides, or more than two sides.
– [Daniel] There’s a guy I know from the integral community, Clint Fuhs, who did a doctoral thesis several years ago that I found really interesting, on this question in the field of developmental psychology: what are higher stages of development, meaning people who can process more complexity with more nuance, and what are the things that correlate with that the most? And for a long time, there had been this idea that perspective-taking was the key correlate. People with higher stages of hierarchical complexity could take more different perspectives, and the nuance, if I’m understanding his work rightly, was that it was actually not perspective-taking as much as perspective-seeking. It was actually an impetus to seek the perspectives, not simply the capacity to take them without doing it.
– [Zubin] Oh, interesting.
– [Daniel] And so-
– [Zubin] Intention, drive, motivation to actually see another’s perspective.
– [Daniel] And so, it’s interesting to think that the thing that would be the generator function of complexity of thought the most is the innate impulse to try to understand where people are coming from, and be able to see their worldview, and then be able to run parallax across lots of those. So perspective-seeking, then perspective-taking, then perspective analysis, can I error correct them, then perspective synthesis.
– [Zubin] Okay, so that’s the process. You know, it’s interesting, because I find that one of the biggest rushes I get is when somebody that I disagree with explains their position to me in a way that I deeply inhabit it, like I’m like, “Oh my gosh!” That’s like a, it’s like a dopamine rush in itself, like, “I understand, okay.” Now, I can actually, and I actually have that desire to understand it. But there are some spaces where I’m much more closed off to understanding other perspectives, like if I’m having an argument with, you know, a friend, or something else, there’s more of a block. So it’s not a one-size-fits-all, but in general, I think that that intention, that desire, that sort of little internal reward from getting to inhabit another perspective, in other words, the possibility of it leads to the intention to some degree.
– [Daniel] Okay, you just mentioned something that I think is real interesting, and we’ll go back to where we were earlier, which is you said you get a bit of a dopamine hit from understanding the view of someone that you hadn’t previously understood, where kind of the world opens up, something makes more sense. There’s like a reward circuit on increased clarity, or insight. The thing I would bring up here is that not all reward circuits are equally healthy, right? There are hijacked reward circuits that are mostly entropic, and then there are reward circuits that are mostly healthier, syntropic.
– [Zubin] So entropic meaning leading to disorder-
– [Daniel] Mostly degrade the system.
– [Zubin] Degrade the system.
– [Daniel] And so, basically, everything in the direction of addiction is kind of an entropic reward circuit, a kind of downgrading reward circuit, and almost all of the upgrading reward circuits don’t provide an easy spike of reward. They require some work, and then reward occurs, and this is the one-marshmallow, two-marshmallow thing.
– [Zubin] Mmm-hmm, explain that for folks.
– [Daniel] Very, very, very famous, classic, many-times-repeated study. There’s a lot of interesting nuance on the study, but basically, you put kids in a room with a marshmallow on the table. The researcher sets a clock for some amount of time, like 15 minutes, and says, “You can eat this marshmallow at any time, but then you only get that one marshmallow. If you wait the whole 15 minutes, and you don’t eat it, when I come back, I’ll give you two marshmallows.” First, assessing that they’d rather have two marshmallows than one, it’s a delayed gratification and impulse control kind of study. And it was found that the kids who could wait and got two marshmallows succeeded more in almost all areas of life. The one-marshmallow kids had higher incarceration rates, did less well on their SATs, had higher divorce rates, all kinds of things, and you could assess this starting at like age five. And so, it was almost bothersomely deterministic-
– [Zubin] Right.
– [Daniel] About how successful people would be based on delayed gratification. Can I forgo immediate gratification in service of long-term goals? It was basically an assessment of the capacity for long-term orientation.
– [Zubin] Hmm.
– [Daniel] And intentionality. And so, there are two different reward circuits, right? There’s two different relationships to the nature of that reward circuit. And so, as you were mentioning, you can hear a point of view that you disagree with, where at first, you might actually feel more uncomfortable.
– [Zubin] Yeah.
– [Daniel] You might actually be squirming in your seat, like, “Fuck, maybe I’m wrong about everything.”
– [Zubin] Which is the worst feeling , right?
– [Daniel] And then you’re like, “Oh, wow, “I learned something,” and it’s very much like the reward circuit of exercising. Like at first, exercise doesn’t give you the hit that just eating the chocolate cake in the moment does. I get an instant hit of increased pleasure from the chocolate cake-
– [Zubin] Yeah.
– [Daniel] Whereas with working out, I’m sweating, and it’s painful, and my muscles hurt, and whatever, but the baseline of how good I feel in my body, as I keep doing that over time, goes up. As I keep eating the chocolate cake regularly, my baseline goes down as I get more inflammation, and obesity, and those types of things, right? So the chocolate cake reward circuit gives me an immediate spike and then a drop, and the drop, now that my blood sugar is low ’cause the insulin overcompensated, makes me crave more sugar. As that keeps going, my baseline erodes.
– [Zubin] It’s a downward spiral into entropy, yeah.
– [Daniel] With exercise, it’s the other one. The same is true, ’cause what you just said is when you listen to the point of view you disagree with long enough and earnestly enough that you understand it, you get a reward on the other side, but the reward actually increased the complexity and the accuracy of your thinking. There’s a reward circuit on the chocolate cake side of this one, which is the reward circuit on quick certainty.
– [Zubin] Being right.
– [Daniel] Yeah. Certainty and sanctimony, and obviously, this is what our information environment is optimized towards right now. The Facebook News Feed is gonna show things to people that will maximize their engagement and their stickiness. And most people, if you put a very long, nuanced research paper in front of them that has views that they don’t agree with, they’re just gonna bounce.
– [Zubin] Yeah.
– [Daniel] They’re only gonna stay engaged, because it makes them feel right, or pisses them off, or scares them. It does some kind of limbic hijack, and also in a short amount of bits, ’cause their attention span is bad, and the short amount of bits means inherently it won’t be complex, or nuanced.
– [Zubin] So, okay, man, so much here. I don’t know how you managed to tie it all back to social media and addiction, but I’m gonna give you kudos for that as somebody who does interviews, ’cause I was like, “How are we ever gonna, ah.” So a couple things you said. Limbic hijack: I just wanna let people know that means hijacking the lower brain functions of that primitive paleolithic mind, which is susceptible to that dopamine addiction. The addiction circuits are there, et cetera. And there are two sides of that, and actually, this is a good place to introduce this term of hormesis, which you and I were talking about offline. On one side you have this addiction circuit where you get a reward, but then you drop to a baseline that’s below the original baseline, and then it’s a downward spiral, versus hormesis, which is more like exercise. Can you explain hormesis?
– [Daniel] Yeah. And I just wanna say, because you have an educated audience, we’re simplifying everything for model purpose and speed purpose. Obviously, it’s not just dopamine, right? You can have a limbic hijack that is more around fear, or outrage, and it’s epinephrine, and adrenaline, and cortisol, but it is also gonna create a certain kind of motivational process. And so, when we say dopamine reward circuit, we’re really simplifying some kind of quick motivational neurochemistry.
– [Zubin] 1,000%, and even the neurochemistry, there are even epiphenomena around neurochemicals, and glial cells, and all of that, yeah. So we’re making, yeah, again, all the models are models.
– [Daniel] Useful to start to help understand the thing, and then don’t idolize ’em.
– [Zubin] Yes.
– [Daniel] Okay, hormesis, so the idea with exercise is that if you’re moving an amount of weight that is really, really easy, you’re probably not gonna get that much muscle growth, because there’s no reason for a bunch of muscle growth. Muscle growth, like you’re gonna have to be able to consume more food to support those muscles. There’s energy involved in the anabolism there. So the body’s only gonna do it if it has to, and it has to because you’re stressing its current capacity. So it’s when you are at the edge of your adaptive capacity that the system says, “Oh, we might actually not be safe “if we don’t increase adaptive capacity, “’cause we’re having to engage in situations “that need all of the capacity we currently have. “Let’s generate more,” right? So hormesis is the property of systems to upregulate their capacity in response to stress that is near the boundary of what they can handle. And exercise is a classic example, but one of the reasons for hot-cold therapy is you’re creating a kind of metabolic hormesis, and as you’re exposing yourself to more variance of temperature, the body actually has an impetus to regulate its own temperature, which means more kind of metabolic flexibility. But hormesis in general, like the two-marshmallow reward circuits are usually more hormetic, right? They usually involve stressing your current capacity, and then you get a reward on new capacity. But obviously, the addictive things generally don’t require you stressing your capacity. They give you some kind of hit within the current capacity limits, and usually in ways that downgrade the capacity levels-
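That "adapt only when stressed near the edge" dynamic can be sketched as a toy update rule. This is purely illustrative; the threshold and growth/decay rates are invented parameters, not anything from the physiology literature:

```python
def update_capacity(capacity, stress, growth=0.10, decay=0.02, threshold=0.8):
    """Toy hormesis model: stress near the edge of current capacity
    triggers adaptation; easy loads let capacity atrophy.
    All parameters are illustrative, not empirical."""
    if stress > capacity:                # overload: injury, capacity drops
        return capacity * 0.9
    if stress >= threshold * capacity:   # hormetic zone: adapt upward
        return capacity * (1 + growth)
    return capacity * (1 - decay)        # too easy: slow atrophy

# Training near the limit compounds capacity upward over repeated sessions.
cap = 100.0
for _ in range(10):
    cap = update_capacity(cap, stress=0.9 * cap)  # always at 90% of current max
print(round(cap, 1))  # capacity has compounded upward
```

The point of the sketch is just the shape of the curve: loads well inside current capacity let it atrophy, loads near the edge grow it, and overload damages it.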
– [Zubin] Overall, yeah.
– [Daniel] So there’s a cognitive hormesis. There’s an emotional hormesis. There’s spiritual hormesis, right? So the cognitive hormesis, like if weightlifting, or sprinting, or whatever it is, is gonna be like a physiological hormesis, the cognitive one is you’re going to feel some kind of stress, pain associated with being at the edge of your attention span. Like if I’m reading text, and my mind’s wandering, it’s the ability to keep fucking focusing my mind on purpose. It is growing attention span as a, like a muscular capacity.
– [Zubin] Yeah.
– [Daniel] But I don’t get it if I’m not actually working at it, and if I’m not at the limit of my attention, I’m not gonna grow my attention-
– [Zubin] It won’t push, yeah. It won’t push through. So this idea that in a way, when we’re seeking other opinions, or we’re trying to use an integral perspective, an alt-middle perspective, to try to synthesize, we’re pushing ourselves through a discomfort, not to the point of failure, where we’ve injured the system, but to the point where the system goes, “Wait, humans are anti-fragile,” which we’ll probably talk about, where that kind of stress actually does create that hormetic effect, where the system then responds by getting more capacity.
– [Daniel] And then there’s a reward circuit on that that is fundamentally a healthier, more aligned one, both healthier for the being, and healthier for the way that being can relate with the world.
– [Zubin] Exactly, so it’s actually a virtuous reward system as opposed to the addictive reward system. So how then, now we have this human mind that’s, you know, paleolithic, and it’s now put in a world where we have a different technology that is relying on reward circuits that maybe are not as virtuous as we would like. Is that where we are now in this conversation?
– [Daniel] I would argue that capitalism can exploit one-marshmallow processes better than two-marshmallow processes.
– [Zubin] Interesting. So are you some kind of communist? I’m just gonna establish this now.
– [Daniel] No, the critiques of, I mean, capitalism doesn’t mean one thing. For some people, they think about that as any system primarily based on private property ownership, which would’ve included feudalism, and a different version, whatever. And obviously, Adam Smith capitalism and Milton Friedman capitalism are not the same kinds of structures, so with and without a central bank, with and without AI high-speed trading of complex financial instruments. They’re totally different structures. I would say all versions of it are inadequate for a long-term viable system, but so are all versions of communism, socialism, and other economic systems as we’ve proposed them, so-
– [Zubin] Perfect.
– [Daniel] There’s a lot to learn from all of them. There’s a lot about theory of markets that’s important, but the long-term system, like what do we have to think about in terms of economics for making it through all of the catastrophic risks the world faces? And if it’s not obvious, briefly, what the catastrophic risks are, you have all the environmental risks that are the result of the cumulative effects of industrialization and globalization. And so, dead zones in oceans, overfishing, biodiversity loss, species extinction, topsoil erosion, climate change, blah, blah, blah, all of those things, peak nitrogen, phosphorus, whatever, all of those are the result of being able to extract resources from the world much faster than they can replenish themselves, and turn them into waste much faster than the earth can process them, i.e. a linear materials economy running on a finite planet that is bound to a monetary system that has a need for exponential growth to keep up with interest. And so, the exponential growth of the monetary system forces an exponential growth of depletion on one side and pollution on the other side. You get all the planetary boundary issues. So that’s one set of things, and obviously, there’s an economic driver associated with all of that, right? We have to change economics to be able to make sure that the social sphere and technosphere are compatible with the biosphere, right? Now, the social sphere-technosphere combo is debasing the biosphere they depend on.
– [Zubin] And you can probably point at the root of that being the primary economic drivers now are one-marshmallow drivers that do not necessarily promote two-marshmallow, delayed, longer term thinking when it comes to those planetary boundaries. So in other words, if I don’t go out and fish the oceans, another country will, if I, like sort of tragedy of the commons, like if I don’t mine that particular ore, some other company will come, and knock the top of that mountain off, and pollute the rivers, and so on. And it’s all in the service of the particular economic model, that is, you’re trying to generate revenue, and those things are rewarded in the current system.
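The tragedy-of-the-commons structure described here can be shown with a minimal toy simulation. This is a sketch with invented numbers, not a calibrated fisheries model: harvesting above the regeneration rate is individually tempting each round, but it collapses the shared stock and yields less in total.

```python
def simulate(harvest_per_agent, agents=10, stock=1000.0, regen=0.05, rounds=50):
    """Shared-fishery sketch: each round, agents harvest from a common
    stock, then the remaining stock regenerates by a fixed fraction.
    All numbers are illustrative only."""
    total = 0.0
    for _ in range(rounds):
        catch = min(stock, agents * harvest_per_agent)  # can't catch more than exists
        total += catch
        stock = (stock - catch) * (1 + regen)           # survivors reproduce
    return total

restrained = simulate(harvest_per_agent=4)  # demand stays under regeneration
greedy = simulate(harvest_per_agent=8)      # demand exceeds it: stock collapses
print(restrained > greedy)  # True: restraint yields more over 50 rounds
```

The individually rational "take more" strategy front-loads the catch and then strands everyone with a dead stock, which is the multipolar-trap shape of the fishing and mining examples above.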
– [Daniel] Yeah, and you know, the economic system creates a discount rate on future value, one, ’cause you can’t predict it fully, but two, the current value gives me the ability to invest that capital, and make compounding interest, or other kinds of financial services investments with it. It also gives me increased optionality in a changing environment. And so, not knowing what the environment will hold, and wanting to do the best I can, I want the most choice tokens, right? And the dollar is a choice token. It’s the ability to, with very high optionality and high liquidity, do whatever would be adaptive, whereas if I have a bunch of farmland, and the thing that I want in the moment isn’t farmland, selling it is gonna take a while. If I have a bunch of mining rights, or I have a bunch of timber, or whatever it is, and especially if I have a bunch of trees that aren’t yet timber, and then I decide that I want to turn them into capital for some purpose, there’s a long lag time. So there’s a game theory optimization towards more optionality, which means the dollar, the thing that has no real value, right, purely representational value, but with maximum liquidity and optionality. I don’t want the things with real value. I want to convert things with real value into the thing with only fictitious value, but that maximizes my optionality. And that’s very much a short-term interest multiplied by a competitive collective action problem, where each agent making the choice that is good for them in the short-term creates a collective making maximally bad choices for the whole over the long-term.
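The discount-rate point is standard present-value arithmetic, PV = FV / (1 + r)^t. A quick sketch (the 7% rate and the dollar figures are arbitrary examples) shows why any positive discount rate pushes agents to liquidate slow-maturing real assets into cash now:

```python
def present_value(future_value, rate, years):
    """Standard discounted-value formula: PV = FV / (1 + r)^t."""
    return future_value / (1 + rate) ** years

# At a 7% discount rate (illustrative), $1M of timber harvestable in
# 50 years is worth roughly $34k today -- hence the incentive to cut now.
pv = present_value(1_000_000, rate=0.07, years=50)
print(round(pv))
```

At any compounding rate, value far enough in the future is worth almost nothing today, which is exactly the "trees that aren’t yet timber" problem in the passage above.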
– [Zubin] And that in itself, and this idea that cash is king for that reason, the idea that those long-term choices are the potential civilization-level risks that we face, whether it’s environmental, whether it’s technological, whether it’s national defense, whether it’s nuclear war, all the things that you talk about. And I think, so going back to the capitalism is a one-marshmallow sort of optimizer, how does that then relate to where we are technologically, say, with one civilization-level threat, which is big tech, social media attention hijack?
– [Daniel] Yeah, I mean, it’s like something that anyone would learn at the beginning of their business career, that every business wants to optimize the lifetime revenue of a customer. And addiction is a really good way to optimize lifetime revenue of a customer, right?
– [Zubin] Every hustler on the street knows this.
– [Daniel] And so, I want, like the best business will apply to the largest number of people, and have the most need for continuous purchases. And so, it’s like, it’s hard to beat fast food, right? It’s hard to, like we can start when they’re young. We can make it apply to almost everybody, and have it be a daily point of purchase forever. And that’s why McDonald’s became very big, and Nestle, and you know, whatever, and Coca-Cola, and all those. But even that is actually dwarfed by social media, because obviously, a kid can start with an iPad before they can talk.
– [Zubin] Yeah.
– [Daniel] And start getting conditioned to hypernormal stimuli and environments customized to them. So we can start very young. Even the most times a day someone drinks a Coke is still fewer than the times they check their phone, right? And so, the total number of points of contact, and it’s optimizing for hypernormal stimuli across lots of vectors, right? It’s getting your-
– [Zubin] News-
– [Daniel] Media outrage. It’s helping, you know, sex appeal, food that you’re interested in, stuff like, and-
– [Zubin] Social credit.
– [Daniel] And it’s personalized to you with AI optimization.
– [Zubin] Right.
– [Daniel] And so, your newsfeed is like if I got to test every different version of salt, fat, sugar, artificial flavor combinations that maximize addictiveness to you, and Hostess could do that for each person, that’s what social media is for the newsfeed.
– [Zubin] That’s terrifying, but it’s absolutely true. Anyone who’s seen the ads that they get directed, anyone’s seen YouTube’s, YouTube’s algorithm I’ve been noticing quite a bit lately, because it is supernatural at reading my mind. It knows how to connect me to people that I’m interested in. It knows, interestingly, here’s a scary thing as a creator, it knows how to connect me to people who are making videos that create FOMO for me as a creator. “Oh my God, how did that guy interview so-and-so? “I’ve been trying to get that guy on my show forever. “How did, Daniel Schmachtenberger? “How’d he get Daniel Schmachtenberger? “Rogan, you loser!” Like I’m like, “Put ’em up!” And it actually knows me in a way better than I know myself. It’s almost hacked into my brain stem, and reading that, and it’s able to monetize that.
– [Daniel] And so, if it wasn’t doing that for a fundamentally extractive purpose, it’d be a pretty cool tech.
– [Zubin] It’d be pretty amazing if it could do something good.
– [Daniel] If you could configure it to do that in a developmental direction.
– [Zubin] Ah.
– [Daniel] Right, if you had some controls over it. And so, there’s a category of tech that I would say, there’s a few categories of tech that are distinct from previous categories. So for instance, autopoietic tech, self-replicating tech, is different than non-self-replicating tech. And this is one of the reasons that so many people have gotten really concerned about bio recently, whether it’s synthetic bio, or CRISPR, or whatever, is because as fucked up as nukes are, they don’t make more nukes. They don’t automatically perpetuate more nukes, unless you have a mutually assured destruction type scenario, which is the deterrent not to use them. But if you release even accidentally a living self-replicating thing, you’ve initiated a cascade phenomenon that is way harder to predict what it’s gonna do, or ever stop. And we know what happens when you introduce an invasive species to an environment, where the environment didn’t develop natural resilience to whatever that thing does. It can really fuck up an environment. But what about if you release an invasive species that is more different than anything else was, ’cause it didn’t even arise through evolutionary process?
– [Zubin] Yeah.
– [Daniel] Well, it might just die off, ’cause it’s not that hardy, but if it has faster feedback loops, that could be destructive to everything very quickly. So biotech the soonest, but then nanotech and AI all have these kind of self-replicating properties. That needs to be its own risk category that’s separate from everything else, because the ability to do risk calculus on it is so hard, and the ability to ever do post facto mitigation is so hard. There’s another category, but you can say something about that-
– [Zubin] No, I was just gonna say you reach the end of your ability to model those things. They reach a kind of singularity, where they’re doing things that you could never have predicted, because the emergent properties of the system are way more complex than any model you have to model it, and that’s what’s scary about it. That’s actually the argument people make about GMOs, and things like that. And I’m not sure, you know, I don’t think it’s necessarily applicable to most GMOs, but in theory, it certainly is.
– [Daniel] The risk-reward calculus, and the risk tolerance, has to change under really catastrophic risk possibility, even if it’s very low probability, when you’re doing something where there’s gonna be a large total number of trials at that low probability-
– [Zubin] Right.
– [Daniel] What tolerance do we have for any probability of something that’s totally existential?
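That question has a simple probabilistic core: the chance of at least one occurrence over repeated exposures is 1 − (1 − p)^n, so a "negligible" per-trial probability compounds with the number of trials. A minimal sketch with made-up numbers:

```python
def chance_of_at_least_one(p_per_trial, trials):
    """P(at least one occurrence over n independent trials) = 1 - (1 - p)^n."""
    return 1 - (1 - p_per_trial) ** trials

# A 0.1% chance per year looks negligible, but run it for a century:
print(chance_of_at_least_one(0.001, 100))  # ~0.095, nearly a 10% chance
```

When the downside is existential, the relevant number isn’t the per-event probability but this cumulative one, which is why any nonzero tolerance repeated indefinitely approaches certainty.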
– [Zubin] Yeah.
– [Daniel] And yet, there’s a perverse game theory in the incentive, which is whoever’s oriented to the opportunity will do better than whoever’s more oriented to mitigate against the risk, right? Because if I say, “Eh, the risk isn’t that high. “You guys are scared of a bunch of nothing. “You’re being hyperbolic. “Look at the opportunity. “We can,” blah, blah, blah, “cure cancer, “connect the world,” blah, whatever the thing that the exponential tech can do in that positive application is, they’re gonna go get the market share, get the money, whatever, and then the harms are gonna be socialized. They’re not gonna be held accountable fundamentally, or they’ll figure out how to bankrupt it, and restart, or whatever it is that is the legal strategies a big company with limited liability can use and good lawyers. And so, they have the incentive. You have a lot of incentive on opportunity, and that’s the kind of Silicon Valley move fast and break things motto.
– [Zubin] Yup.
– [Daniel] And so, you privatize the gain, socialize the losses, awesome. Now, if your focus is to do really good anticipation of second and third-order effects, identify wherever there might be an externality, figure out how to internalize it, you’re just gonna go so fuckin’ slow, and spend so much research money that you just lose.
– [Zubin] You’ll go broke.
– [Daniel] Yeah.
– [Zubin] Yeah.
– [Daniel] So unfortunately, again, there is a perverse incentive to do the riskier versions, pretend they’re less risky, only do bullshit versions of risk analysis for plausible deniability that you did it.
– [Zubin] Wait, why does this sound like everything Zuckerberg’s doing right now?
– [Daniel] You can pick him, but you can pick others too-
– [Zubin] I’ll pick anybody, I’ll pick anybody. But so, this is interesting. Have you, I imagine you’ve seen the movie “Don’t Look Up,” and-
– [Daniel] I haven’t actually seen it, but I know-
– [Zubin] You know the premise.
– [Daniel] Yeah.
– [Zubin] So everything that you’re pointing to, everything that, and honestly, I watched the movie, and I don’t see it as a, you know, climate change thing. I saw it as a COVID thing, ’cause you frame it however you wanna frame it-
– [Daniel] Any catastrophic risk.
– [Zubin] Any catastrophic risk, and COVID isn’t really a catastrophic risk, but I saw it framed as the science versus, thesis-antithesis kind of thing. It was so depressing to watch because it held up a mirror exactly to these kind of things, like the short-term gain issue. They wanna mine this asteroid instead of destroying it. So the equivalent of Zuckerberg decides he’s gonna do this, and I mean, it was crazy, and I was watching it going, “No, that’s exactly what would happen.” That’s exactly what would happen with our current thing, and the social media was deployed tribalizing people. There’s a meaning crisis and there’s a sense-making crisis, like what even is real? Is the asteroid even there? Hence the title of the movie, “Don’t Look Up.” Well, don’t look up, or you’ll see what’s actually there. It’s right there.
– [Daniel] Yeah, so I mean, this is also important, obviously. We live in a reality where most of the stuff that feels most consequential, we have no first-person sensing of.
– [Zubin] Right.
– [Daniel] We are getting it mediated through a 2D screen, through other people’s thinking, and other institutions, and you know, those types of things. That’s significant, right? It’s significant to people’s ability to have collective intelligence systems work. There’s a book, I think it was called “The Politics of the Invisible,” that was looking at regulation issues the countries downwind of Chernobyl had to deal with, like should farmers be allowed to grow food here and sell it, given that the radiation was invisible. And for the first time ever, not the first time ever, but like for the first time it became really obvious that there was totally invisible stuff that was totally consequential. So the farmers and the layperson had to trust those who had the ability with the Geiger counters, and whatever, to tell them stuff that only the priestly class that understood how to do that could know.
– [Zubin] Oh.
– [Daniel] Because now, we’re engineering in the invisible in a really fundamental way.
– [Zubin] Which we’re not designed to deal with as humans-
– [Daniel] We don’t have the ability to all check and balance and go through the same epistemic process, unless you happen to have the Geiger counters, and the knowledge of nuclear physics, et cetera, and then the biophysics to say how much increased radiation of what type is gonna create how much mutation, to then be able to weigh that against the difficulty of the farmers losing their jobs, and those types of things.
– [Zubin] Yeah.
– [Daniel] And so, having created a very scientifically advanced society also means that then you need a lot of scientific insight to weigh in on the policy things. But that also means now people have to just trust a priestly class. And, or they all have to be adequately educated, and have access to all of those tools, and that, like adequately educated about virology, and nuclear physics, and epidemiology, and climate, and et cetera-
– [Zubin] Not possible.
– [Daniel] So you start to reach information limits. So then you have to say, “Well, fuck it, “we need institutions that are trustworthy “that can do that.” Well, how do you get trust? How do you get everybody to be able to trust institutions? Even if you had an institution emerge that everyone trusted, ’cause it had some transparent way of being able to show real good epistemic process, and lack of vested interest, and rigor, and checks and balances on power, and it became a kind of legitimate authority, there’s so much power to being a legitimate authority, to being able to be the arbiter of what is real that everyone who wants to win at the game of power will have a maximum incentive to try to corrupt that thing.
– [Zubin] Hmm.
– [Daniel] And so, then who wins? The scientists, and mathematicians, and philosophers focused on the thing, or the best players at the game of power, who are funding the thing, who have a maximum incentive to start moving it in one particular direction? So how do you maintain legitimate authority and have it not get captured? It’s a real tricky thing.
– [Zubin] Why does this feel so familiar, Daniel, to a certain experience we’ve had over the past two years, where there are factions, who just, there’s no way to even make sense, and anybody can hijack the mantle of authority, including the mainstream, including the antithesis, including it doesn’t matter, put some letters after your name, and say some stuff. And how is an average person without the education in virology, epidemiology, biostats, all the other stuff supposed to make sense of it? And this has been a fundamental problem.
– [Daniel] Well, I’ll just say it live on your show. This is why, since you do such a good job of steel manning the various points of view, being able to look at thesis and antithesis with regard to COVID, I think it would be fantastic for you to get leading earnest thinkers on both sides of the lab leak hypothesis, the vaccine mandate, the small molecule approach, all of the major issues, and be able to facilitate a structured dialectic that identified what they agreed on, where they disagreed, what the basis of that disagreement was, what it would take to resolve it, that oriented away from rhetoric and towards good formal methodology, and that everybody got to watch and see. I think that would be amazing, and I think you’re the right guy to do it.
– [Zubin] How dare you, number one, okay? Number two, what you’re describing is a solution, is one solution to this sense-making crisis, which I think I feel very strongly about as well, this idea, and you used a term that I think we should explain, steel manning. You said steel manning. So it probably behooves us to understand what a straw man is, and then we can talk about a steel man. So very quickly, a straw man is where you take an opponent’s argument, and you actually distort it in a way that you, and you can correct me if I’m explaining this wrong, and you put it out there, and go, “Well, see, what they’re saying is “that they wanna kill all kids by not vaccinating them.” That’s called a straw man, ’cause you’re building the straw man, then you attack it, ’cause it’s easy to attack, right? So it’s misunderstanding, or purposefully misrepresenting what they’re saying to create a straw man. A steel man, on the other hand, is actually deeply re-conveying what the argument is in very clear terms to where even the people who believe this argument are like, “Yeah, you nailed it, that’s what we’re saying.”
– [Daniel] Yeah. Straw man’s where you are obviously just putting them down and being pejorative, appealing to your own base, but that’s about it. The worst straw men are the ones that look like they’re steel manning, but they aren’t, right? You say, “Well, it’s understandable why such-and-such, “and why they would think this, “and they care about this and this,” and it seems like you’re empathizing, but it’s a faux empathy, ’cause what you’re doing is setting up what looks like you empathize, setting up what looks like a solid understanding of the thesis, so that then when you critique and take it down, people will assume that you understand it well, except you constructed it in a way that looked earnest, wasn’t, and had weakness in it, right? So the faux steel man straw man is the most common dreadful one.
– [Zubin] Uh-huh.
– [Daniel] Real steel manning is where someone who has actual expertise and care on the other side would say, “I have nothing to add.”
– [Zubin] Yeah.
– [Daniel] Like you actually comprehensively did the thing.
– [Zubin] You got it good, yeah, and that’s important, because first of all, then it eliminates the straw man effect where you’re deliberately weakening the opponent’s argument by misrepresenting it, and ad hominem-ing, and all of the other stuff that you might throw into the straw man cocktail. And it shows that you actually understand what they’re saying, which means you already have some degree of empathy with what they’re doing, which is important, I think.
– [Daniel] Yeah, I mean, it does several things. It communicates enough respect for another fellow human being, or tribe, or group of people that you-
– [Zubin] Assumes good intent, too, probably-
– [Daniel] You took the time to see what they were saying, and it shows that what you say doesn’t need to be dismissed instantly, as you didn’t get where they were coming from first.
– [Zubin] Right, which is important, yeah.
– [Daniel] And this is one, like democracy requires this process. Democracy, like modern democracies, the US, and other kind of modern European democracies, emerged out of the European Enlightenment, and this is why the founders in the US said, “This system only works “with a comprehensively educated citizenry.” And George Washington said, “It’s the primary aim of the federal government to ensure “the comprehensive education in the science of government “of every single citizen,” science of government meaning everything you would need to understand to know how to do collective choice-making well. So the difficulties of game theory and vested interest, economics, history, all those things, comprehensive education of every citizen in science of government is the number one goal of the federal government, because if the number one goal is rule of law, you can do it with a police state. If the number one goal is to protect the boundaries, you can do it with a military dictatorship. If you have the number one goal of being anything other than the comprehensive education of every single citizen, it won’t be a democracy, ’cause democracy’s very fragile, and requires that kind of thing, and it requires not only that everyone can make sense of third-person reality together, apply the philosophy of science, and kind of natural philosophy, but that they can make sense of each other’s positions and value systems well, which is the good faith discourse, the empathy, the Hegelian dialectic. Can I really argue your position well? Without that, democracy has no chance. And so, when people have made their fellow countrymen the primary enemy, in the short-term, they think they’re trying to win a battle about COVID, or climate change, or systemic racism, or whatever the thing is.
What they’re actually doing is waging a war against democratic process writ large. As you make your fellow countrymen the primary enemy, all of the energy within that system goes to infighting, and then you elect more polarized representatives, who create more gridlock, and authoritarian countries are just gonna do better geopolitically across every way. And then what you’re really fighting for is the authoritarian process to have more global influence over the 21st century.
– [Zubin] Yeah. We’re fighting for China, basically.
– [Daniel] So really, the democratic process requires an epistemic commons, shared sense-making, and shared meaning-making to inform shared choice-making. Democracy is a shared choice-making process. But that means that we have to have some process to talk about what is-
– [Zubin] Right, sense-making, right.
– [Daniel] And some process to talk about what do we want, and I have to be able to understand the thing that you want and understand that just fucking you with the thing, you don’t stop being a political actor, and then you’re gonna act even harder next time. We’re just gonna escalate arms race.
– [Zubin] Yeah.
– [Daniel] So how do we make compromises that address what everybody wants?
– [Zubin] Yeah.
– [Daniel] And so, the bad faith argumentation destroys the social contract, and the epistemic commons, destroys democracy. Good faith argumentation is not sufficient, but is necessary. And so, the steel man is the beginning of a good faith argument.
– [Zubin] Brilliant, and I’ll say this. If you look at any comment on YouTube on, say, any of my videos, you can find accusations of bad faith, accusations of capture, accusations of all kinds of straw manning of my arguments, and things like that, and this is a common, this is the default comment that will rise in the rankings in terms of likes on a YouTube algorithm, or on a Facebook algorithm. And so, this then dovetails nicely into how is social media then kind of weaponizing this process in a way that maybe wasn’t intentional, but is now the epiphenomenon of everything from the incentives that set it up to the algorithms that now run it to our primitive brains that are being addicted to it? How then is it creating civilization-level risks in us, including the simple question of are we generating a ton of, like, we talked about this offline, a ton of subclinical mental illness in people, you know? Like what are we doing?
– [Daniel] Okay, we’ve got like at least three really important directions to go here, the subclinical mental illness, or kind of like ubiquitous psychopathology of the whole population that is not innate to being human, but an epiphenomenon of the civilization design, and honestly ubiquitous subclinical psychopathology and physiopathology, both, right?
– [Zubin] Mental and physical illness.
– [Daniel] Yeah.
– [Zubin] And when you say subclinical, meaning it hasn’t quite reached the threshold-
– [Daniel] Diagnosable.
– [Zubin] Being diagnosed-
– [Daniel] But in the direction of.
– [Zubin] In the direction of, and the other thing you said, just to tie it in, is that the Anthropocene, this era of this technology feeding back to us, is generating this. This is not something innate to us.
– [Daniel] Yeah.
– [Zubin] It’s a feedback with this technology-
– [Daniel] Okay, so that was one thing. The other is what does social media do that is intrinsically destructive to the social contract, the epistemic commons, democracy itself, and maybe other stuff?
– [Zubin] Mmm-hmm.
– [Daniel] And that gets to, I was starting to give the example of there’s a unique category of tech, which is self-replicating tech, just to basically say there’s a few novel categories of tech that exponential tech makes. The one I wanted to get into was social media, why it’s a unique category of tech, so I’ll add that in there.
– [Zubin] Let’s do that, uh-huh.
– [Daniel] But then the third thing that you brought up, which was why do those comments that are cynical on you emerge the most, let me do that one real quick.
– [Zubin] Yeah, let’s do that. By the way, I never write shit down when I’m interviewing, but with you, it’s like we’re doing all this stuff, I have to write it down, or we’ll never come back to it.
– [Daniel] They’re all good, and they’re all interconnected. We could take any of the directions to begin with-
– [Zubin] Perfect, yeah, hit me with the cynical comment piece.
– [Daniel] So kind of drawing on the pre/trans framework that Ken Wilber popularized, I won’t even bother constructing that right now. People can look it up. I’ll just say-
– [Zubin] I’ve talked about it before, too, yeah.
– [Daniel] One kind of model of the development people go through in terms of game theory, like their own intuition of game theory, you could call naive, cynical, post-cynical as a developmental process in the direction of increased capacity to do citizenry. Naive is where you assume good intent on other people’s parts as a kind of default. You assume that what people are saying is generally what they believe, and that other people’s interior experience is probably a lot like yours, and that where fucked up things are happening is probably mostly because of mistakes, and things like that. And if you interact with the world, and particularly systems of power enough, that will get destroyed in you, right? And then the naivete will crash upon the rocks of reality, and then you’ll move into a cynical place, where you realize how much ladder-climbing there is, how much of virtue is virtue signaling, how much of communication as a whole is strategic, where someone is saying something for the effect they want it to have, not just for what they maximally believe, how much of the epistemic commons is the result of everyone’s agentic desire to make other people believe stuff, rather than just what is true, all those types of things, right-
– [Zubin] And by the way-
– [Daniel] Get cynical.
– [Zubin] And when you say epistemic commons, you’re talking about where and how we even get information, yeah.
– [Daniel] Yeah. We could call it the knowledge commons, which is just the available information that we can all look at.
– [Zubin] Yeah.
– [Daniel] But it’s not just the knowledge, because the methodology that produced it, right, the epistemic process, is gonna be key to whether or not I believe it, how I factor it, all those types of things, and then what do I do with that? There’s an epistemic process of how do I factor that data to actually come up with forecasts, and conclusions. And so, the data is not that interesting by itself. The entire epistemic commons, how do we collectively make sense of the world together?
– [Zubin] Got it, got it, so you had the naive, then you had the cynical.
– [Daniel] Yeah, now, in cynical, so you read Machiavelli, and you get a good description of the cynical, and see that that type of court dynamic is actually playing out. And typically around any center of power, there’s something like court dynamics. Around any very wealthy person, you watch in their social circle, and there’s something like court dynamics.
– [Zubin] Yes, we’ve all seen it. We see it at CDC, we see it at FDA, we see it every big institution.
– [Daniel] And so, then you end up getting situations, where whoever is in the position of most power will get a distortion bubble of the type of feedback they get because people want something from them, so there are certain types of disagreements that won’t happen. So then you’ll get the propagation of their own biases, and other people confirming it, and all kind of nonsense like that, right?
– [Zubin] The Steve Jobs reality distortion field type of deal. Actually, that’s-
– [Daniel] There’s different kinds of reality distortion. This is one kind of vested interest reality distortion, and if you want an introduction to how to think cynically, “The 48 Laws of Power” is probably the best you can get.
– [Zubin] “48 Laws of Power,” who’s that by?
– [Daniel] Robert Greene.
– [Zubin] Got it, okay.
– [Daniel] And he’s taking Machiavelli, but also Sun Tzu, and von Clausewitz, and most of the kind of strategic thinkers, and giving a summary of how does the game of power work.
– [Zubin] Yeah.
– [Daniel] And you know you’re in the cynical if you say something like, “If you don’t know who the sucker in the room is, it’s you.”
– [Zubin] Yeah.
– [Daniel] Right? Because you assume that the game of power is happening everywhere, there are some suckers, they’re getting taken advantage of, et cetera.
– [Zubin] Yeah.
– [Daniel] I think the key is that there is such a thing as post-cynical, and the cynical would say, “The only thing that is not cynical is naive,” because typically, things don’t even have an awareness of a developmental stage that is beyond them-
– [Zubin] That’s right-
– [Daniel] So that’s where you get mistakes-
– [Zubin] Blindness, yeah.
– [Daniel] So the post-cynical says it is possible to authentically care about something more than just self-interest in a way that you’re actually willing to make real sacrifice for, and that there are other people who also can, and that authentic relationships of trust can be formed.
– [Zubin] Hmm.
– [Daniel] So I couldn’t be post-cynical if I didn’t believe that there were other people that could be post-cynical. Right, ’cause I have to believe I can have-
– [Zubin] They need to be there.
– [Daniel] Of trust.
– [Zubin] If everybody else is cynical, you can’t, there’s no trust. If everybody else is naive, there’s no, you can’t do it.
– [Daniel] If they’re just naive-
– [Zubin] Yeah.
– [Daniel] I can’t trust ’em that much, because by accident, they’ll get played by the cynical people, and even though they’re trying to be loyal, the whole thing will still get fucked up-
– [Zubin] It’ll fall apart-
– [Daniel] I have to know that the other people have an immunity to that process, that they’re aware of it, but they are also not bound to it. This is in the world, but not of it, right?
– [Zubin] Ah.
– [Daniel] And so, to be post-cynical, you have to know that other people can be post-cynical, and then you have to be able to know how, what is a legitimate basis of trust, and how do I know that? How do I sense that? And so, what I would say is post-cynical good faith communication is necessary for something like the continuation of anything that’s at all democratic, or a republic. Anything that has participatory governance and is not just the people being ruled requires the people being able to develop that. So that has to become the center of the values of a culture that then develops that in people and checks it and reinforces it.
– [Zubin] So does a post-cynical, this is fascinating. Does a post-cynical standpoint require the sort of transcending and including the naive and the cynical standpoint?
– [Daniel] Of course.
– [Zubin] So you have to be able to understand and inhabit the naive and the cynical to develop a post-cynical.
– [Daniel] In the same way that a metamodern, or an integral, or a post-postmodern system actually has to understand postmodernist critiques on modernity well, and then also understand modernity well, right? You have to understand the philosophy of science-
– [Zubin] Yeah.
– [Daniel] And the Hegelian dialectic. You have to understand the postmodern critique of why there is no objective fact.
– [Zubin] Right.
– [Daniel] And why there is no objective absolute perspective, and claiming it’s probably some type of imperialism, or power game in that you can’t separate game of power from the propagation of ideas.
– [Zubin] Yeah.
– [Daniel] And then not just leave it there, ’cause that’s an untenable place to leave it, and say how do we take that critique as a refinement, ’cause that’s a deconstruction process. How do we take that deconstruction as a refinement to a constructive process, which means everybody can get on the same page with something other than the game of power, ’cause if all there is is the game of power, and all truth claims have no basis other than the claimant’s power, then what you’re actually doing is exalting power as the only possible thing, and that thing actually self-terminates, right? A situation where everyone is just playing the game of power means that the epistemic commons and the social contract break, democracy breaks, you can’t sense-make climate change well enough to do any solution with it that half the population doesn’t resist, or COVID, or AI, or anything. And so, you have to come to a way of being able to make sense of the world, address values, and cooperate beyond just the game of power, which is, you know, kind of the metamodern thing. So the same with the post-cynical, right? The post-cynical understands why the cynical is there, and has to be resilient to that. So let’s go ahead, and see where, say, a left, or woke perspective would say good faith just means you’re a naive sucker, because let’s say we’re talking to a Native American about the history of the US keeping its agreements with Native Americans. And they’re like, “Okay, well, good faith communication just continues to privilege the current power system, because it just says let’s all be polite, and civil, and whatever, and not address the fact that the power system came the way it was by some people totally fucking other people, and there’s no real push for them to change it, and the victors rarely just give that up, and every time we made the agreement, then they fuckin’ violated the agreement.” And the black community could say the same thing.
If you go and look at the history of like, oh, the Emancipation Proclamation, but then fuckin’ Jim Crow, and peonage, and you’re like, okay. So at some point, oh, I see, your argument about being good faith is just gaslighting. You’re not gonna be in good faith. You’re gonna pretend. You’re gonna ask us to be in good faith, and then just fuck us-
– [Zubin] And you’re gonna screw us.
– [Daniel] Yeah.
– [Zubin] Oh, you see this play out on social media all the time-
– [Daniel] In which case cynicism then seems like the only informed thing.
– [Zubin] Right.
– [Daniel] And so, you have to have some process to be able to say how do I show up to a good faith process and really sense if the other side is, and ensure they are, and orient towards that simultaneously, ’cause otherwise, we get defect-defect in the prisoner’s dilemma. How do we get something to ensure against defect-defect and be able to have cooperate-cooperate, because everybody rationally doing defect-defect leads to catastrophe for everybody.
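The defect-defect versus cooperate-cooperate dynamic Daniel describes can be sketched as a standard prisoner's dilemma payoff table. The specific payoff numbers below are illustrative assumptions, not from the conversation; they just need the standard ordering (temptation > reward > punishment > sucker's payoff) for the dilemma to appear:

```python
# Prisoner's dilemma payoff matrix (illustrative numbers, standard ordering).
PAYOFFS = {
    # (my move, their move) -> (my payoff, their payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """Given the other player's move, pick the move that maximizes my payoff."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defecting is the rational best response no matter what the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...so two rational players land on mutual defection (1, 1), even though
# mutual cooperation (3, 3) would leave both of them better off.
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```

This is the sense in which "everybody rationally doing defect-defect leads to catastrophe for everybody": each player's individually rational move produces a collectively worse outcome, which is why some mechanism for assuring mutual good faith is needed to stabilize cooperate-cooperate.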
– [Zubin] Yeah, yeah, and that prisoner’s dilemma we’ll talk about another time, but the basic idea is that this is game theory, and this, so this is fascinating, because again, all these pieces that we’re talking about are actually, and I’ve never really fragmented them into these little bits that are actually very helpful to make explicit. This post-cynical idea of discourse, it is central to what I call alt-middle, which is really, again, it’s integral. It’s an integral perspective. It’s saying okay, how are we gonna make sense? How are we gonna have dialogue? How are we gonna try to find our way through the morass of what is, and what ought to be if we can’t even agree that we’re acting in good faith and that everybody’s accusing everybody else of bad faith, right? So well, and I’m just, so, again, going, boy, I almost feel like we have to go back. We should go and do one of these other two, because otherwise, we could do a whole show on just that. Do you wanna do subclinical mental illness here, or do you wanna do social media as a destructive element of, they kind of feed into each other a little bit-
– [Daniel] Yeah, you pick.
– [Zubin] All right, let’s talk about subclinical mental illness, because I’m obsessed with this idea. You and I were talking about it before, and I was just like, “Yeah, we need to talk about this,” because this idea that something in our Anthropocene, and what we’re doing technologically, sociologically, environmentally, et cetera, is generating a pandemic of multiple things: narcissism, anxiety, depression, emotional repression. Not all of this is being diagnosed, so we call it subclinical, but some of it is. There’s a whole web of things that’s happening that is fed back onto, and potentially in a causative loop with, the very technology that we’re talking about here. I’m curious how you think about this, because again, we see it in, say, primary care. There’s a saying that like, “Hey, 50% of what we see is supratentorial,” above the tentorium of the brain in the cortex. In other words, everything is biopsychosocial, but this is particularly psychosocial, and whether it’s depression, or whatever, it’s manifesting as a physical illness, because everything is interrelated. Everything is biopsychosocial, so I’m curious how you think about this.
– [Daniel] Well, humans are very adaptive. Biology in general is adaptive. Humans are particularly adaptive. It’s actually, the thing we were saying about humans’ ability to do tech co-evolved with how adaptive we are in a way that’s an important point to just take a moment on. Other animals are shaped by their environment, right? A particular mutation either is more advantageous, or not in that environment. The longer neck is useful if there’s leaves higher up, or whatever it is. So the animal’s really formed by its environment. Since the tool-making process, we were gonna be changing our environments, right? We could both move environment, ’cause we could put the tool of clothes on, and go to a cold place, or whatever it was, and we would start making dwelling units, and other things, change our environment. We couldn’t be too genetically adapted to a particular environment, since we’re gonna be continuously changing it. And so, this is one argument on why humans are neotenous for so long, right, why we’re still kind of helpless and embryonic for so long compared to any other animal, is, you know, it takes us a year before we’re even walking, and it takes a horse five to 20 minutes to be walking, and there’s no way horses would survive taking that long. It takes a humongous amount of parental attention to keep these little things alive for so long. Like why are we helpless and useless for so long? Because there’s less inbuilt code on how to be what we are because we’re gonna be exposed to new environments where we have to actually be soft-wired rather than hard-wired to the new environment we created. So we’re gonna be able to learn whatever language is here, we’re gonna be able to learn how to use these tools. The most fit person in the paleolithic era at being able to throw spears is not that good in a time where texting and other shit is more useful, right?
– [Zubin] So this hardware, this hardware of humans that starts out so neotenous, meaning we’re babyish for so long.
– [Daniel] Yeah, so we’re saying, so one of the things that’s unique to us is that our genetics selected for memetics, right?
– [Zubin] Meaning mimicry.
– [Daniel] Meaning mimicry, meaning-
– [Zubin] Not like meme like the memes we think about, M-I-M-E, yeah?
– [Daniel] I do mean the memes we think about-
– [Zubin] Oh, okay, okay-
– [Daniel] The memes we think about online are an expression of the idea of a meme, the way Dawkins coined it in “The Selfish Gene”: what a gene is to biology, a little piece of code that codes a particular behavior and replicates itself, a meme is to consciousness, or culture, a little unit of code that codes a behavior, and seeks to replicate itself. So beliefs, aesthetics, fashions, whatever, end up both making you behave a certain way, and then as other people see you behaving that way, end up leading to-
– [Zubin] They spread.
– [Daniel] Spread, right?
– [Zubin] They spread, yeah.
– [Daniel] And so, our genes haven’t changed a lot. Our memes changed very, very, very quickly, and our behavior changes, and our culture changes, and our technosphere changes. So we are adaptive in a way nothing else is adaptive because it’s so software-based, right? Like our hardware is selected for the ability to do fast software upgrades.
– [Zubin] Ah.
– [Daniel] And so, I wouldn’t, it’s weird to talk about mental illness, ’cause in one way, mental illness is a destructive normative concept, ’cause what we define as mentally ill is based on the dominant paradigm of the time. So obviously, believing in animism was mental illness during the oppressive Catholic Dark Ages.
– [Zubin] Right, right.
– [Daniel] And you know, mania, or hysteria in women, because they had a sexual impulse, was a mental illness at a certain time, and-
– [Zubin] Right.
– [Daniel] So to some degree, it’s like basically the psychiatric diagnosis is how to keep everyone behaving within that civilization model.
– [Zubin] Yeah.
– [Daniel] And it’s saying anything outside of that is bad, even though it might be quite healthy, and this was like Krishnamurti’s statement that being well-adjusted to a profoundly insane society is not a good measure of mental health.
– [Zubin] There you go, DSM-V criticism right there.
– [Daniel] And yet, on the other side we can say, so one way we can say the ways that people are are mostly actually the results of them being adaptive to their environment, not maladaptive, right, but being adaptive, but in an environment that incentivizes you to lie, and to not trust people, and to even weaponize trust by creating fake trust in people, so you can then, you know, take advantage of them, whatever, if that’s adaptive, right, you’re actually being environmentally trained to be that way. You’re getting feedback-
– [Zubin] That’s right-
– [Daniel] On being that way.
– [Zubin] I’m starting to smell a narcissism undercurrent here, right, with social media.
– [Daniel] So you think about social media, and there’s a gazillion things about it. But obviously, most of what comes is a very small number of bits with some kind of dopamine hit associated. I’ve got some pictures, images, whatever, not that many bits, then the next one; nothing like very long text fields. Somebody’s gonna bounce on that. So it shortens attention span, and just hits a lot of hypernormal stimuli, which means then normal stimuli don’t do anything, right? Once you’ve watched enough of the super airbrushed pictures, then normal women start to look less pretty.
– [Zubin] Once you’ve seen enough porn, yeah.
– [Daniel] Then you can actually formally fetishize it, meaning you can’t eroticize without it.
– [Zubin] Right.
– [Daniel] In the same way that fruits and vegetables don’t taste good once you’ve done too much McDonald’s.
– [Zubin] Right, or even an Impossible Burger, which is apparently all vegetables, but tastes like crack to me.
– [Daniel] So you’re providing a hypernormal stimulus that then desensitizes the receptor to normal stimuli, right? Which means that it’s actually decreasing the potential for happiness and contact with the world, right? Now, you need a narrower thing with higher amounts of it and saturation to be able to get any effect at all-
– [Zubin] The same level, and not even the same level, sub-sub-level, yeah.
– [Daniel] So social media does all those things, but one of the things it does in the customizing of it, right, the personalizing it to you, if this is starting when someone’s a kid, they have this experience that the world conforms itself to fit their preferences.
– [Zubin] Hmm.
– [Daniel] In a way that no other previous technology would have ever had a corollary to. And so, if the whole time I’m on the digital world, it can, with very, it takes very little action on my part, right? I push a button, with very good UX to change everything. So I get a lot of change from very little action, and it’s all conforming to personalize itself to me. The contact with the world of atoms and in the world of real people’s gonna suck-
– [Zubin] The world of remodeling your house, for example.
– [Daniel] Well, even remodeling my house, I’m still, it’s gonna take more work, but I’m still trying to personalize the world to me.
– [Zubin] Right, but there’s a challenge. In other words, the UX on the social media app, you can personalize your world with a couple clicks.
– [Daniel] Way easier. But this is all to say the tribal dynamic that was our evolutionary process was one where nature is not modifying itself, what Eric Weinstein calls contact with the unforgiving. It’s not modifying itself to you at all. The rose is gonna fuckin’ prick you if you touch it. So you have to modify yourself to work with the environment, and you have to really modify yourself to work with the tribe, right? The tribe is not all going to bend over backwards for whatever your particular whim of the moment is. So is that conditioning narcissism at scale generationally in a way that nothing ever has, along with addictiveness, short attention, orientation to sanctimony, certainty, et cetera? So again, you take that humans are molded, like all animals are molded by their environment. Humans change their environment, so we’re more moldable to be able to become fit to new environments. We’re making environments that are more radically different than our evolutionary environments. Then we are becoming the psychological result of those things, and then are those the kinds of people we wanna be?
– [Zubin] Okay.
– [Daniel] Are those the kinds of people we wanna create?
– [Zubin] Okay, okay, so man, that’s, this idea that we are in equilibrium in some way with our tribe, with our environment, with our built environment, with our technology, and it actually feeds back into our own behaviors through these memetic doses back and forth, it really cuts to the heart of, now, is this where we wanna go? Is this a civilization-level threat that we’re developing that self-terminates? And how does this relate to something like, say, Haidt and Lukianoff’s “The Coddling of the American Mind,” which talks about exactly this? We’re now creating a generation of, say, fragile children and adolescents who have more diagnosable mental illness, who have more suicidality, if not actual completed suicides, and more inability to cope with challenges that, say, the previous generations did not have as much trouble with. Does this all kind of tie together? I’m curious your thoughts on that.
– [Daniel] Okay, so let’s come back to this kind of ubiquitous subclinical psychopathology. Just as a reference of, you know, what I mean when I say subclinical is let’s say we’re talking about a nutrient deficiency. There is a clinical vitamin C deficiency, which is scurvy, which is a really, really low level, meaning you’re acutely dying-
– [Zubin] Arr!
– [Daniel] Below that level-
– [Zubin] Scurvy knave. Yup, you’re losing teeth, and all that.
– [Daniel] But there is an ideal level, and there’s a range in between the ideal level and dying of scurvy. I would call that subclinical deficiency.
– [Zubin] Yup.
– [Daniel] Even though it’s within the healthy range, meaning not yet the formal disease. And so, I would say there is a subclinical nutrient deficiency that’s pretty ubiquitous across lots of things. There’s an acute level of poisoning of a particular kind of petrochemical toxin, or heavy metal, or whatever, but then there’s any level above the evolutionary environment’s non-exposure to it that is causing some stress on the physiology that I would call subclinical toxicity. So the volatile organic compounds coming out of the paint, and the carpet, that are carcinogens, or neurotoxins, or whatever. So it’s the idea that you can have something that is disruptive to the ideal homeodynamics of the system, but subclinical. It’s not yet causing the acute pathology, but it’s contributing to a causal cascade with a lot of other things that can have an effect. So those are examples of that physiologically. The same is true psychologically. So before someone has diagnosable, you know, major depressive disorder, they can be much more depressive than would be an optimal state for someone, and we can call it kind of subclinical. Before they have generalized anxiety disorder diagnosably, they can have a lot more background anxiety than is necessary for the human experience. And the same is true for OCD, the same is true for complex PTSD, right? Complex PTSD, where you have kind of an excessive trigger response, sympathetic response to some kind of trigger, but instead of acute PTSD, where that’s from one event, complex PTSD means something that occurred many times. I would say it’s arguable that people have complex PTSD on civilization, right? Like they have this kind of continued trauma associated with lots of things that creates an increased sympathetic response to lots of things. That is a pretty common phenomenon, right?
– [Zubin] And actually, just look at the pandemic. I mean, this has been happening throughout. You have this constant sort of fear, people doom scrolling, all the stuff feeding back together, and then people afraid to leave their house, people triple masking, and then other people saying, “I’m afraid my liberties are gone,” and reacting that way. It is this kind of pandemic of subclinical and even clinical anxiety, depression. I don’t know, I don’t think the narcissism component is there in that as much, because there’s a sense of lack of control to some degree, but who knows?
– [Daniel] Well, I mean, I would say, so increased narcissistic trait, subclinical narcissistic personality disorder, and I’m not saying this because I’ve done the stats on it. I don’t think anyone has done the stats that I would want to see, so I’m speaking kind of intuitively here. First, I would say that many of these, like whatever the official stat on how many women have gone through sexual assault is, it’s just too low. It’s way too low, for a bunch of reasons, meaning almost everybody that I know has the shared consensus that a much higher percentage of all the women they’ve ever met went through some form of sexual abuse than the one in four that the statistic supposedly is. It’s more like 90%, or 85%, or something. So I think there are some kinds of under-reporting, under-assessing types of issues that are part of it. And so, when we look at what percentage of the population has narcissistic personality disorder, whatever, that might be wrong, right? Those assessments might be wrong for any number of reasons. There’s reasons to hide that signal if you can.
– [Zubin] Yeah.
– [Daniel] But I would say that the percentage of narcissistic personality disorder, or antisocial personality disorder in the general population is probably much higher than it ever was in an indigenous tribe, in which case I would say it was probably much, much closer to zero, because it would have always been found out, and be non-adaptive.
– [Zubin] Yeah.
– [Daniel] And I would say it’s gonna be much higher as you climb up the stack of power and you have to actually win at lots of power games to get up there. And so, you know, there are some stats, I don’t know if they’re any good, but it’s like, you know, 5% in the general population and 30% in the C-suite of Fortune 500s, and probably something like 90% in the C-suite of top financial services companies, and-
– [Zubin] Yeah.
– [Daniel] So-
– [Zubin] It’s adaptive in those locations, yeah.
– [Daniel] It is selected for, incentivized, conditioned.
– [Zubin] Yeah.
– [Daniel] Et cetera.
– [Zubin] All the above.
– [Daniel] Which then also means that people with those conditions also have a disproportionate amount of power and influence institutionally.
– [Zubin] Which that feeds back on a population level, because the institutions are a big part of our socio-technological environment.
– [Daniel] So do I think that, you know, narcissistic traits are much higher than is native to the human condition under different developmental environments, and particularly tribal? Like tribal environment, you’re just not gonna get away with lying very well-
– [Zubin] Yeah.
– [Daniel] ‘Cause there’s too much surveillance. You’re not gonna get away with hurting people in a small environment, where everybody knows what’s happening with everybody. You’re not gonna be able to hide the effects of things, right? The increased transparency. But as soon as the system gets large enough that you can hide the effects, then you start having an evolutionary niche for those kinds of parasitic and predatory behaviors.
– [Zubin] It gets right back to the asshole agenesis.
– [Daniel] Exactly.
– [Zubin] The same exact thing that now it becomes not just okay, but there is a niche where it flourishes, and it may not actually be directly incentivized, but it certainly isn’t disincentivized, and actually there are, like you said, selection factors that actually enrich it in that population. And then, so this is interesting, because the idea that this exists now at a higher level, say, than in tribal times means that this is not something you reduce to genetics. It’s not something you reduce to DNA. It’s a complex biopsychosocial process that’s an interplay between us, our environment, our genes, our hardware, our software, and everything else, yeah?
– [Daniel] Let’s say we’re looking at genes and some particular psychiatric condition, or neurological condition. It’s like does everyone with APOE4 get Alzheimer’s? No, are there some people who have Alzheimer’s that don’t have APOE4? Yes, so that’s part of, that’s one type of causal dynamic. It is not a one-for-one causation. Typically, in biological situations, it’s not simple as one cause, right? It’s a lot of things that collectively can reach a threshold and you have nexus causation.
– [Zubin] That’s right.
– [Daniel] So are there genes that predispose certain kinds of things, but there’s a lot of other attenuating factors and whatever? Totally.
– [Zubin] Yeah.
– [Daniel] That’s what I think with regard, I think that the anxiety, the depression, the narcissism, the short attention spans, I think those are all conditionable, and I think they’re conditionable where, you know, I like to look at positive deviance culturally, like where a whole culture for some period of time had one, or two standard deviations more of some desirable trait. So I like to look at Jains for nonviolence, or Buddhists, or Quakers, and be able to see can you have a whole population, which means all the genetic mutation within that population, and all the kind of variance, but where the population as a whole its median violence is a standard deviation better than everybody else’s?
– [Zubin] So there’s a group dynamic-
– [Daniel] Which means that culture, right, actually was able to, with factoring genetics, whatever, attenuate that a lot, and the same across very different environments: do Jews usually educate their kids better than a lot of other cultures do, and reach a higher level of education? It’s very clear there’s a cultural effect that does that thing, right? And so, then you start to think about could you have something that had Buddhist nonviolence and Jewish education and whatever types of processes, could you have a culture that was developing, you know, a different set of traits in people that made a different set of civilization possibilities, associated with an increased post-cynical possibility? Now, of course, we’re also looking at the fact that the technology is developing people itself. As we were saying, simply the infinite scroll personalized to me type thing is going to have effects on attention span. It’s gonna have effects, and having an effect on attention span also means it’s gonna make me more one marshmallow everywhere, including offline.
– [Zubin] Mmm-hmm.
– [Daniel] Because I don’t have delayed gratification built-in. And-
– [Zubin] We see that now-
– [Daniel] It means democracy will end, because democracy requires a long enough attention span, a long enough working memory, that I can hear someone’s perspective, hear multiple perspectives, hold them all in working memory, and try to find a proposition that would meet many of their goals, right? Like if I really wanna be able to do collective intelligence well, it requires that, which is why, you know, Marshall McLuhan argued that not only was the printing press a prerequisite for democracy, the written word being accessible to everybody, everybody could get a newspaper, so you didn’t need a specialized knowledge class that had access, and everyone could get textbooks and read, but the written word as the primary type of media was probably required for democracy to work, because it required people to think well enough that they could communicate in writing in long form in a way that could then translate to legal code, right, to really think through things formally, and that they were reading, which meant increased attention span for non-dopaminergic stuff, which also meant enough working memory to hear multiple perspectives, to be able to find something that might work, come up with a good proposition.
– [Zubin] It’s like the Hamilton era, where, you know, they would write these big treatises, and they would argue in like 20 pages of text, and it was beautiful prose, extremely compelling and well-written, and persuasion and knowledge transfer all happened there, as opposed to now, where it’s a political statement on social media with like, dislike, and anger reactions, designed to inflame rather than educate.
– [Daniel] I mean, you’re one of the very few public intellectuals I know who has publicly changed a position.
– [Zubin] Okay, first of all, I take offense to the term intellectual, all right? I’m dumb as a rock. Number next, yeah, I’ve done that.
– [Daniel] Yeah.
– [Zubin] I continue to do that, actually.
– [Daniel] Yeah, it’s super important. But you can’t have democratic process have any effectiveness if people don’t change positions, right?
– [Zubin] Yeah.
– [Daniel] ‘Cause otherwise, what you end up getting is voting on a binary, right? One candidate or another, or yes or no on a proposition, inherently polarizing the population, because the proposition was created to benefit one thing, but it leaves some other stuff out, or harms something else. So the thing that you want is gonna help your kids’ lives be better, but it’s gonna fuck up the environment I care about, and now we’re actually caught in a seemingly existential situation against each other. All the energy goes to infighting. I have to be able to find out what you care about and craft a proposition that will actually be a better synergistic satisfier, and communicate it to you in a compelling way. Otherwise, democracy’s gonna decay fairly quickly into this kind of situation.
– [Zubin] And I think we’re-
– [Daniel] Which means the ability to upgrade my own position with new information, the ability to model your position well, the ability to try to take a new position that’s informed by yours that I didn’t have before.
– [Zubin] So basically, the punchline is that what we’re doing now with technology, or what’s being done to us, is eroding the very foundations of our ability to be an effective democracy. But is it affecting, say, an autocracy like China in the same way?
– [Daniel] Well, we know they controlled their internet, because they were aware of this, right? The idea that both education and independent media were prerequisites for democracy was clear from the beginning of modern democracies, right, the idea that an independent Fourth Estate was actually requisite for an educated population. Education means people have enough history and background knowledge and logical process that they can process information well, and the newspaper meant they’re continuously informed. The idea that we can all get base facts on a situation in common, and then we can disagree on what to do with them, or how to interpret them, but we have some shared base reality to agree on, was obviously requisite for something like democratic process when we’ve thought about it. So you can think about the engagement-based personalized newsfeed as not just breaking the Fourth Estate, but being an anti-Fourth Estate. It’s really doing the exact opposite thing, because a Trump follower in Florida and a Bernie follower in Portland, their newsfeeds might not have a single piece of news in common-
– [Zubin] Not a bit-
– [Daniel] But they’re all gonna cover COVID, and climate change, and the Ukraine, and everything, right? They’re gonna cover everything, but with the most opposite valence. The only version of the other they’ll get is a straw man version.
– [Zubin] Right.
– [Daniel] And they will see so much news that conforms to their idea that it looks like everybody smart agrees, and only ridiculous people are on the other side, and the social network curates for people that are, you know, similarly agreeing on their side. So you get that kind of confirmation process. Can you run a democracy in a situation like that? And okay, remember how I was saying I wanted to mention why social media is a different kind of tech, its own unique class? I would say it’s anything that creates natural network-based monopolies, Metcalfe’s-law-mediated monopolies, and that’s important, right? There’s a reason we didn’t like monopolies before: you get unchecked power. Our whole system was built on the recognition that unchecked power becomes fucked up, people don’t deal with it well, so let’s try to create as many checks and balances on power as we can. We were leaving the kingdom for that reason, to try to make a more democratic union. And so the state, and the church, and the market would all be kind of separate and interacting. That’s a check and balance on power by itself. There won’t be one state religion; all the churches get to compete with each other for the minds and hearts of people. The market means all the businesses compete with each other, and the state has three branches, and the three branches have checks and balances on power with each other. So we didn’t like monopolies, ’cause if a corporation has nothing that can compete with it, and people start to need the thing that it produces, it can obviously start to abuse the power, become corrupt, and it was competition that kept it from being able to do that, the kind of basic market theory idea. Well, typically, the only way to get a monopoly was to do some activity where you were forcibly crushing the competition through some kind of, you know, fucked-up practices, or you got the government to grant you unique access rights, or whatever.
So there’s a bunch of antitrust law to prevent that stuff from happening. But with the network dynamics that the internet specifically made possible, a natural type of monopoly emerged that was totally different in its process. It didn’t require those same fucked-up practices, and yet you still get the function of something that doesn’t have competition checking it, right? So Metcalfe’s law basically says that for something whose value involves user-generated activity, the more users that are involved, the more valuable it is, and it’s not just linearly more valuable, it’s second-power more valuable. And so a currency that only a few people use is not that valuable, ’cause I don’t want to have 50 different currencies in my wallet for the different stores that all take different currencies. One currency that’s usable everywhere, like a monopoly on money, which the Fed kind of is, has a very high use value. Similarly, on Facebook, every fuckin’ person I know is there. I don’t want 30 different social networks where I have to have 30 different logins for different groups of people, and I can buy everything I want on Amazon, and I can find everything I want on Google, right? So when you have a network-type dynamic where the more users you have, the more value you have, as a second power, once you get up to a certain percentage of the population involved, you reach an escape velocity where you will become a monopoly, and you’ll have one search engine that has radically more dominance than all the other search engines, one online store, one social network that has radically more dominance within its vertical. And you might get something that can get a lot of dominance in a slightly different vertical, like Twitter relative to Facebook, but it’s a different media type.
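The network math here is simple enough to sketch. This is an illustrative toy, not anything from the conversation itself: Metcalfe’s law values a network at roughly the square of its user count, which is why a modest head start compounds into effective monopoly.

```python
def metcalfe_value(users: int, k: float = 1.0) -> float:
    """Metcalfe's law: a network's value scales with the square of its user count."""
    return k * users ** 2

# Two networks competing in the same vertical: one is only twice as big,
# but it is four times as valuable, so users keep defecting toward it.
small = metcalfe_value(1_000_000)
large = metcalfe_value(2_000_000)
print(large / small)  # 4.0: doubling the users quadruples the value
```

Past some share of the population, that quadratic pull is the "escape velocity" Daniel describes: the value gap grows faster than any competitor can close it.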
– [Zubin] Right.
– [Daniel] And so, that creates natural monopolies we don’t currently have a way to check, and at huge scale intrinsic to the process. It’s at a huge scale of social influence, right? Facebook and Google both have like 3 billion people, which is the population of China, and Russia, and the US, and Iran combined.
– [Zubin] They’re bigger than nation states.
– [Daniel] Bigger than all the biggest nation-states combined. And so you’re like, “Okay, that’s a fuckin’ huge deal.” There were no corporations that had that kind of reach before. And it’s customizing what people are exposed to with personalized data and AI optimization, that combination of things. Amazon recommendations are doing that, and Google is doing that, YouTube is doing that, Facebook’s doing that. So what we have is a massive-scale, right, maybe trillion-dollar, multi-billion-person, personal-data, AI-optimized behavior-mod system with no checks and balances on the power of it. That’s not a company the way Adam Smith thought of companies.
– [Zubin] Right.
– [Daniel] And when you take that, if it has an ad model, where the more time people are spending on site, and the more engaged they are, the better the ad revenue’s gonna do, and it has a fiduciary responsibility to maximize value to the shareholders, so it has to keep optimizing that thing, the AI’s gonna optimize for engagement, and the one-marshmallow sticky shit is just more engaging, right? Because otherwise, I don’t plan my day and say, “I wanna spend seven hours upset-scrolling Facebook.” I say, “Fuck, I don’t wanna spend any time on Facebook. I’m just gonna check it real quick,” and then I get stuck, because something that was hypernormal enough brought me into a rabbit hole that obscured my prefrontal assessment of the amount of time I don’t wanna spend on Facebook, right?
– [Zubin] You got hijacked.
– [Daniel] I got hijacked.
– [Zubin] Yeah.
– [Daniel] And so, it does not have the incentive to keep me intentional. It has the incentive to capture my attention, right? And so, if you have something that does that, it’s going to increase all limbic process, so outrage, and addiction, et cetera, which also includes polarization, and tribal identities, and certainty, and sanctimony, right? It’s gonna increase all those things. So it’ll double down on bias rather than undo bias. It’ll double down on tribal in-group kind of stuff, and shorten the attention span, so you couldn’t even listen long enough to hear a counter-perspective, and the counter-perspective wouldn’t be dopaminergic enough to maintain the attention span anyway. So that can’t not polarize the population. And so, China was smart to be like, “No, fuck it, we’re not gonna let that thing emerge. It’ll ruin our nation.” And so, if you polarize the population with increased certainty and outrage, everybody thinks their thing is an existential risk: if Trump gets elected, it’s the end of America; if Trump doesn’t get elected, it’s the end of America; whatever, right? Then the people will select more polarized representatives. More polarized representatives in Congress and the Senate can’t cooperate with each other at all, so everything gets gridlocked. Everybody filibusters each other, and whatever. An increasingly gridlocked system gets slower and slower, and more and more expensive, in its capacity to regulate tech that is moving faster and faster. And if the regulatory system has slower feedback loops than the thing it’s trying to regulate, it will just lose the ability to regulate it. And so, a tech that polarizes the population, and polarizes the representative class, that creates gridlock, that decreases the capacity of governance, will make that governmental system lose to another system that doesn’t have those problems going on, that doesn’t have all of its energy going to infighting, that doesn’t have gridlock in its decision-making process, and-
– [Zubin] Has a longer term outlook, too.
– [Daniel] It has a longer term outlook, because you don’t think that in four years everything you did will get undone.
– [Zubin] Yeah.
– [Daniel] And so, in the same way that the printing press kind of ended feudalism and started to make modern democracy possible, this tech is kind of ending democracy as we know it, and orienting towards authoritarianism, unless we start to redesign it in a fundamentally different way.
– [Zubin] Okay. Anybody who hears that and isn’t absolutely terrified that complete societal and democratic decay is a real existential threat was not listening to what you just said. And I’ll say this, because we’ve interspersed throughout the conversation what the actual possible solutions are, like how do you get to post-cynical reasoning? How do you get to an integral level of thinking? How do you have rational discourse, and make sense, and have these sort of commonalities? With social media, and I’ll tell you, I’ve tasted this, because look, I have 2 1/2 million, 3 million if you count Doc Vader too, subscribers on Facebook, I don’t know, half a million on YouTube, half a million on Instagram, so I am complicit in this feedback loop. And yet, here I am saying, you know, “This is terrible, we’re all gonna die.” But there are these interesting new emergents that are happening, little small spinoffs, like Locals, say, which has certain values: we never cancel anybody, we don’t censor anybody, the creators set their own communities, they set the tone and the spirit of what they’re doing, and it’s pay to comment, so there’s a barrier to entry where you’re actually paying for the product. You’re not the product yourself. There’s no advertising. And I’m on that. And the difference in tone, in post-cynical thinking, between Facebook, where I also have people who are in a supporter tribe and pay to be there, and that place, where the whole structure of it is different, a different application of the technology, is night and day, and people who are in both places will tell you, “Yeah, there’s no competition.” People are more collegial, they have different opinions, and I don’t have to moderate at all, even though people get to post whatever they want.
So is it just a question of intelligent design of this stuff that’s looking at the outcomes we want, even though we can’t really always predict what the second-order and third-order effects are?
– [Daniel] Yeah, so this gets us back to the idea that we make environments that are in turn conditioning us. And yet, we’re not necessarily being very intentional about all the ways that they’re gonna condition us. We’re making those environments for usually some very narrowly defined purpose, and usually some profit purpose.
– [Zubin] One-marshmallow tricks, yeah.
– [Daniel] And so, is everybody familiar with broken windows theory?
– [Zubin] Probably not.
– [Daniel] So, environmental psychology: it just has a huge effect. You take a ghetto area where you’ve got spray paint, and trash, and broken windows, and you simply replace the broken windows, clean up the graffiti and trash, and the crime drops something like 50% almost instantly. And you take an area that’s good, the windows break, some graffiti comes, you don’t clean it up, and crime jumps almost instantly. These are the same people, same genetics, same neurobiology, same whatever, who are more or less likely to be criminals simply based on whether they’re observing that the windows are broken or not. This is pretty fundamental stuff, because, of course, one environment is signaling this is a place that nobody cares about, a lawless place, whatever, and the other is signaling something different. And that’s just the beginning of the field of environmental psych and the way that environments affect us. You walk into a cathedral, and the technology of the cathedral has an effect. You look up, and when you look up, you feel small, and you have some sense of vastness that starts to come. And, you know, I was mentioning that the volatile organic compounds built into the paint and the carpet are subclinical toxins, you know, the formaldehydes, the organophosphates. We did not even evolve to process these things.
– [Zubin] Right, they didn’t exist.
– [Daniel] They were not part of what our liver and kidneys were supposed to deal with. So that’s a way that we built for a very specific purpose. Or we started putting fungicides in paints to prevent internal mold, which then started to also make mutated species of molds, and whatever. We were designing the paint for some very specific purposes, but it produced second and third-order effects we weren’t paying attention to. But even low ceilings matter, whereas, you know, tribal people that mostly lived outside looked up a whole lot more, and there’s neurology to looking up, right? It actually stimulates different types of cognitive process-
– [Zubin] Yeah.
– [Daniel] To have your head and your eyes go up. So the cathedral knows that, puts things up there on purpose that stimulate a certain kind of process. So that’s environmental design, right, environmental psych applied to environmental design. The term ontological design is where we’re designing environments that are in turn affecting our own future ontologies, our own future experience of the world, beliefs, ways of being, et cetera. And I think most people know that if you’re in an old-growth forest, or if you’re in a beautiful temple, or if you’re in a ghetto, you feel different instantly, simply based on the nature of the environment, the lighting, all those things. Digital environments even more so, right? Because digital environments can be so extremely different, have such extreme personalization, and reward dynamics, and all those types of things. And people are spending more of their time in digital environments that we’re obviously not adapted to, and that are designed for mostly extractive purposes regarding you, rather than developing you in positive kinds of ways. So the first part is to say tech is affecting our value systems and our experience of what it means to be people, individually and collectively. Tech is not values-agnostic. That’s the first kind of construction, and I’ll come back to the other part. Tech is not values-agnostic. If you read Marvin Harris’s “Cultural Materialism,” it’s a very empirical school of cultural anthropology that basically says the changes in our social systems, and even the changes in our religion and culture, were usually preceded by changes in tech that coded for certain patterns of behavior that coded for certain value systems that were associated, so-
– [Zubin] The plow, for example, yeah-
– [Daniel] You wanna give that one?
– [Zubin] So, you know, this is a fascinating one, and God, this is central to our experience as humans, and our experience in healthcare, taking care of sick people. We pull them out. We left-brain yank them out of their context into a room that’s cold and sterile, and we say, “Here’s what’s wrong with you based on these extracted tests,” where we’ve totally decontextualized these humans. Anyway, back to the plow. Pre-plow, societies tended to be very interesting. They had maternal female gods. Women would actually do a lot of the farming and stuff, because they could, and the guys would go out and do hunting and gathering, and things like that, and it played to their strengths at the time. Then the plow gets invented, and here’s the technological thing. Now, several things happen. First of all, you can use an animal to pull the plow, which changes the theology and the animism around our worship of animals, ’cause if you elevate an animal to some sort of godlike creature, and you’re forcing it to do this manual labor, well, that doesn’t really vibe. So now the entire culture shifts to where it’s like, “No, God gave man dominion over animals, so now, use them as you will.” The animal is no longer a sacred thing in itself. And for women, manning things like plows becomes very physically demanding. They would have miscarriages, et cetera. So now it became disadvantageous for women to do the farming, so they ended up taking different roles, and because the production of food is associated with deities, et cetera, now you have male-dominated types of gods. And so, the entire course of civilization changes from an invention, and these are spiritual changes, emotional changes, social changes, and technological changes, and they all interact in this web.
It’s fascinating.
– [Daniel] Obviously, the plow gave surplus in a way we didn’t have before, which was really key, because now you can have grains, and in a gathering environment, grains are not that useful; you’ve gotta have a lot of grain for it to be really useful. So the combo of the plow and the basket means now we can have really meaningful caloric surplus, so we can grow our population much faster, and we can make it through famines, and we can start to have concentrated wealth. As a result of that, ownership starts to mean something, and property rights start to mean something, and then class systems start to emerge, and then civilization structures to deal with class systems start to emerge. And if you own stuff, then inheritance follows, and then the formalizing of marriage, and the changing of those contracts to have paternity certainty associated with the inheritance of the stuff. All of those things follow that kind of tech, right?
– [Zubin] And that’s a plow!
– [Daniel] Yeah-
– [Zubin] That’s a plow.
– [Daniel] So then, I would say, and of course, the history of this is actually a million times more nuanced, I just want people to get the gist of the kind of thing, but the animism is just such a great example. I can bow-and-arrow kill a bison, and still pray to the spirit of the bison, and say, “When I die, I wanna become soil that becomes grass for the grandchild of this bison, and be a part of the great web of life,” and everything. But I can’t take the bison, hybridize it into an ox, yoke it, cut its balls off, and beat it all day long to pull the plow, and still believe in the spirit of the buffalo, right? And so, the plow codes a pattern of behavior, which is beating an ox all day long, and that requires a change of value system. And yet, the tribe that doesn’t do that is just going to lose, because they have so much less caloric surplus than the tribe that does. So this is an example of how the tools change the value systems. There are other examples where you can make value-system changes independent of tools that will in turn change how we use the tools. The Sabbath was kind of a social technology for how do we not use the tools for some period of time, not be hijacked by productivity, orient towards the value system and each other, to make sure that we’re in turn directing those things.
– [Zubin] Ah.
– [Daniel] That’s one way of thinking about it. So we have to actually get this direction of developing culture to bind the tech and inform the social structures. But you can also build tech knowing that the tech is going to influence the psychology, the values, and the behaviors of the people, and that should be intentionally factored into the design process. So then for the design of Facebook, the design of the monetary system, the design of the Anthropocene, the design of the paints and the buildings and all those things, you’d say: in addition to whatever it does, right, it’s supposed to answer phone calls, it’s supposed to give me access to information, it’s supposed to keep rain off, in addition to that, it’s also affecting the psychological, physiological, and relational health and dispositions of the people. So we say the number one design constraint of all the tech has to be that the ontological design effect it has is enlightening to the people it’s interacting with, right? So then the ontological design question is: how do you rebuild civilization from scratch? How do we rebuild the technosphere and Anthropocene from scratch to maximally enlighten the people that are within it, right? Both increasing their own health, happiness, and well-being, and their sense-making, meaning-making, choice-making capacities for collective well-being. And I think you start to realize it doesn’t take everybody having a deeper understanding of these things to start designing tech differently in ways that will affect everyone.
Obviously, it took not that many people to build Facebook, which then started to affect 3 billion people, and as a result everybody, right, because of the effect it had on broadcast media, and culture, and what the US government, and military, and economy are doing. And it wouldn’t take that many people to build something that was more protopic in what the tech influenced people to do behaviorally. And so, yeah, that is one of the sources of optimism.
– [Zubin] Yeah, and it’s tricky, because threading the needle requires the optimism that it can be done. And I think it can be done, but as you’ve said before, the window through which we thread this needle to avoid existential, catastrophic, civilization-terminating issues is narrowing. You know, if you look at the cosmos and go, “Why are aliens not here?”, and of course this is a vast simplification, is there a great filter out there, where civilizations all go through this process of narrowing and narrowing and narrowing until, poof, most of ’em don’t make it, and are we gonna be one of those? I think Adyashanti, who’s a kind of spiritual-teacher guy, has said the entire universe could blink out of existence, and the true fundamental nature of reality wouldn’t even notice. So this hubris that somehow we’re special and can survive I think is not quite right. So part of it is a more thoughtful design. Part of it is group sense-making, I think, of how are we going to decide what we value? What matters to us? You mentioned the Sabbath, which I think is very interesting. My family and I, and I have done videos on this, do a thing called screen-free Sunday, where we turn off all the devices, and we open up to each other again, and we’re very present, and so on. Even my kids don’t have social media, or any of that, but they do use devices for other things. And it’s been transformative for us, even that one day-
– [Daniel] Right.
– [Zubin] And I went on a meditation retreat recently, six days, where I was completely unplugged, and I’ll tell you, if we think we’ve permanently damaged ourselves, we’re wrong.
– [Daniel] Yeah.
– [Zubin] Because I’ve never, I mean, I was wide open, connected, feeling love for people, and when I came back and did a couple shows, it was actually against my will to some degree. But when I did it, you could see those shows. Like it’s a different me. I’m like, “Guys, let’s really “just try to connect this together.” And I think that’s doable. I think it is, but it takes some collective will.
– [Daniel] I have a colleague, Zak Stein, who helped build a system called Lectica that is one way of assessing a certain kind of human development: the complexity of someone’s thinking process. They’re able to look at someone’s writing of certain types and assess the complexity and nuance in their thinking, and they’ve made progress in being able to do that with machine learning and natural language processing. And what that means, the complexity and nuance of thinking, is the ability to take lots of different perspectives, the ability to notice bias and error in those, the ability to notice ways of synthesizing and finding higher-order reconciliation. So let’s say, for instance, and I’m really just throwing this out as a suggestive way of where something could go, not as the exact proposition. Let’s say we built a Facebook-like thing that had lots of different types of content being personalized to the individual, but that didn’t have a business model requiring ads to do it, so it had a different basis of optimization; it wasn’t trying to maximize time on site, or engagement in that way. Let’s say the thing it was trying to maximize was your Lectical Score.
– [Zubin] Hmm.
– [Daniel] And so, it was looking at where you were typing in responses, and your general engagement, and it wanted to see that your ability and your disposition to seek out and steel-man more perspectives, to have more nuanced perspectives, more ability to orient towards empiricism where it was relevant, et cetera, that those were the things it was optimizing for. And it was curating content to you based both on what would be outside of what you were already exposed to, and also within the zone of proximal development where you could grasp it. Empirically upregulating in that way would produce really different results, right? You could be growing attention span rather than decreasing it. You could be growing multi-perspectival capacity through the nature of what that AI pointed at your brain is optimizing for.
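As a thought experiment, the contrast between the two objectives can be made concrete in a few lines. Everything here is hypothetical: `engagement`, `developmental_gain`, and the item fields are invented stand-ins, and no real Lectica scoring API is implied. The only point is that the same ranking machinery serves whichever objective you hand it.

```python
def rank_feed(items, user, objective):
    """Rank candidate content by whichever objective the platform optimizes for."""
    return sorted(items, key=lambda item: objective(item, user), reverse=True)

def engagement(item, user):
    # Ad-model objective: predicted time-on-site (the one-marshmallow pull).
    return item["predicted_minutes"]

def developmental_gain(item, user):
    # Alternative objective: reward perspective-broadening content that sits
    # in the user's zone of proximal development -- stretching, but graspable.
    stretch = item["complexity"] - user["complexity"]
    in_zone = 1.0 if 0.0 < stretch <= 0.5 else 0.1
    return in_zone * item["perspective_breadth"]

user = {"complexity": 1.0}
items = [
    {"name": "outrage clip", "predicted_minutes": 45,
     "complexity": 0.8, "perspective_breadth": 0.1},
    {"name": "steel-manned debate", "predicted_minutes": 12,
     "complexity": 1.3, "perspective_breadth": 0.9},
]

print(rank_feed(items, user, engagement)[0]["name"])          # outrage clip
print(rank_feed(items, user, developmental_gain)[0]["name"])  # steel-manned debate
```

Swapping the objective flips which item surfaces first; nothing else about the system changes.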
– [Zubin] Okay, I think that’s a doable thing. It would depend on, again, like you said, what you’re optimizing for in terms of incentives. Facebook went viral because it hacked into the capitalist incentive, this is how you make money through ads, and so on, and it just became self-replicating that way. But sometimes YouTube’s algorithm will do something like that. It’s quite remarkable. I do some meditation, awakening-type content, consciousness content, and YouTube’s algorithm seems to feed that content to people who have a vague interest in it, but who also may overlap slightly with healthcare. It kind of knows this in some way, and people will message me and go, “I don’t know, I never knew who you were, but I’m a nurse, and I’ve been having this existential crisis, and I found your piece on,” whatever, “Donald Hoffman, and then YouTube fed me a rabbit hole of your stuff, and I’m so much better for it, because now I’m interested in this, and I went to a retreat, and I did this.” So you can see the virtuous stuff happen. So the question you bring up is: can it be optimized so that that’s the endpoint you’re actually looking for, that Lectical Score that Zak Stein, you said, is developing?
– [Daniel] Yeah. Zak could be a great guy to have on your show-
– [Zubin] We’ll have to do it.
– [Daniel] To discuss it. Tristan would be a great guy to have on your show, because YouTube made some adjustments based on some of the criticism that happened, and he knows this topic much better than I do in terms of where adjustments were made, and where things have actually progressed, and-
– [Zubin] I’d love to hear that.
– [Daniel] And where they could progress more. And there are reasons that you don’t wanna optimize just for Lectical Score. That’s why I said it’s just the beginning of a line of thinking, because if Lectical Score is the only thing you optimize for, someone who has more advanced theory of mind-
– [Zubin] Yeah.
– [Daniel] Without necessarily more advanced compassion, or embodiment, can be a more effective sociopath-
– [Zubin] Psychopath, yeah.
– [Daniel] Yeah.
– [Zubin] Exactly.
– [Daniel] And so, I don’t think you want to optimize for any one measurable thing, right? The thing that is measurable will have externalities. Even a weighted algorithm of N measurable things will leave out stuff that’s really important, and if you’re optimizing for it, you’ll end up with those externalities. So you can optimize, but you want a process where the thing you’re optimizing for is itself being continuously iterated.
– [Zubin] A recursive process with the optimization parameters.
– [Daniel] Where the externalities of what’s being harmed by the optimization are continuously being internalized. But that means you also need a governance system that can do that, which means you can’t be bound to something like fiduciary responsibility for profit maximization.
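Daniel’s point here, that any fixed, measurable objective leaves out things that matter, and that what you optimize for has to itself be iterated as harms are noticed, can be sketched as a toy model. Everything below is hypothetical and illustrative, not anything from the conversation:

```python
# Toy model of optimizing a proxy metric vs. internalizing an externality.
# All functions and numbers are made up for illustration.

def proxy_score(action):
    # What the optimizer can measure: engagement, profit, GDP, etc.
    return 2.0 * action

def hidden_externality(action):
    # Unmeasured harm that grows faster than the measured benefit.
    return 0.1 * action ** 2

def true_welfare(action):
    # The objective after the externality is noticed and internalized.
    return proxy_score(action) - hidden_externality(action)

actions = [float(i) for i in range(50)]

# Optimizing the fixed metric pushes to the edge of the action space.
best_by_proxy = max(actions, key=proxy_score)       # 49.0

# Internalizing the harm stops where marginal harm meets marginal benefit.
best_internalized = max(actions, key=true_welfare)  # 10.0
```

In a real system the `hidden_externality` term is exactly the part nobody has measured yet, which is why the objective has to keep being revised rather than fixed once.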
– [Zubin] Okay, okay, so to internalize the harms that are being generated, man, that requires a whole different way of doing things, right? Because if the fossil fuel industry had to internalize the environmental harm of their actions, it would be a very different industry. I mean, how do you even begin to think about doing that?
– [Daniel] I mean, it’s interesting. You’d actually have to have a process to do that, because if you mandated today that the fossil fuels industry had to internalize all of the costs it externalized, the civilization would collapse overnight.
– [Zubin] Yeah, yeah.
– [Daniel] Because-
– [Zubin] There’d be no energy.
– [Daniel] There’d be no energy, because the energy’s just priced at how much it costs us to extract it-
– [Zubin] Right.
– [Daniel] And then a little margin to sell it.
– [Zubin] Right, not the common costs of the harm.
– [Daniel] Not the costs that took nature to produce it, or that it would take for us to be able to produce it through some other method, and then the environmental harm, social harms, whatever that happened. If I wanted to be able to pay for all of those things, probably my cell phone is $10,000.
– [Zubin] Yeah.
– [Daniel] Right?
– [Zubin] Yeah.
– [Daniel] And so, there’s a huge amount of externalized costs, and energy is really fundamental in this one. But that is a perverse incentive, right? There’s a fundamental perverse incentive. So you would want to be able to identify those externalities and internalize them where, and what that would do is it would bind the incentive of the agent with the well-being of the commons, right, that the company only got better when it was doing things that were comprehensively better for the world, not got better while fucking the world in different ways.
– [Zubin] Yeah.
– [Daniel] So how do you internalize externality and close perverse incentive niches is one of the fundamental things humanity has to get better at, right? Like we have to fundamentally change our socioeconomic systems to do perverse incentive minimizing much better than they do. How to do that is a complex topic, but-
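The fossil fuel example above, that the pump price covers extraction plus a margin while the replacement and harm costs stay off the books, is just addition with made-up numbers. A purely hypothetical sketch:

```python
# Hypothetical full-cost accounting for one unit of energy.
# Every figure here is invented for illustration only.

extraction_cost = 40.0   # cost to extract, refine, and distribute
margin = 10.0            # seller's profit margin

market_price = extraction_cost + margin  # what the buyer actually pays

# Costs currently externalized onto nature and the commons:
externalities = {
    "renewable_replacement": 120.0,  # producing the same energy another way
    "environmental_harm": 80.0,      # ecological damage, emissions, etc.
    "social_harm": 30.0,             # health and social costs
}

full_cost = market_price + sum(externalities.values())

print(market_price)  # 50.0
print(full_cost)     # 280.0
```

The gap between the two numbers is the perverse incentive: the cheaper the externalized version looks, the more the market selects for it.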
– [Zubin] Hmm, and how much of this do you think is gonna require new technology that enables this kind of thing? Like you said, like what Zak is working on with the machine learning that can generate these Lectical Scores, versus how much of it is more related to your Sabbath point, which is-
– [Daniel] That’s a technology. Depending upon how you wanna think of it, right?
– [Zubin] Yeah.
– [Daniel] That’s a social technology in the same way that whether it’s institutional marriage, or whether it’s tribal council, or whether it’s, you know, Sabbath, it’s basically that is a technique.
– [Zubin] Right, right, right, right-
– [Daniel] That can be, that also requires abstraction to understand why it’s an important thing, and that can be taught and passed down.
– [Zubin] Right.
– [Daniel] So I think when I say social technology, I’m not referring to things like Facebook. I’m referring to things like sweat lodges-
– [Zubin] Right, so that’s what I was, that’s what I was getting at. Does it require that sort of technology, meaning self-awareness, self-realization, self-improvement technologies, like sweat lodges, psychedelics, guided psychedelic therapy, meditation, spirituality of any kind, that sort of thing? Do you think that’s necessary for humanity to survive?
– [Daniel] Just, you’ll appreciate this. The first time I got to do a sweat lodge, I was invited to do it on a reservation. It was my first time there, and there was another white guy there, also his first time, who, about 3/4 of the way through, freaked out at the heat and kind of ran outta the thing screaming. And the old shaman was sitting right next to me, and he whispered in my ear. He says, “We never trust someone “who would bail on the sweat lodge.”
– [Zubin] Oh!
– [Daniel] “Because if he’s so scared that it’s bad, “that this is too hot for humans, “he just left us all in here.”
– [Zubin] Oh!
– [Daniel] And so, either he really doesn’t, he really knows it’s safe, and he can’t control his mind-
– [Zubin] Yeah.
– [Daniel] Or he doesn’t think it’s safe, and he was willing to save himself, and bail on everybody.
– [Zubin] Wow.
– [Daniel] So you could see there was a social technology-
– [Zubin] Yeah, yeah, yeah-
– [Daniel] Right, where it’s gonna get hot enough that, because the guys have been doing it for generation after generation, you know intellectually it’s not gonna hurt you, but it’s gonna get hot enough that it feels like it’s gonna hurt you.
– [Zubin] Yeah.
– [Daniel] Which means that you have to watch that fear and not succumb to it.
– [Zubin] Yeah.
– [Daniel] And you’re all in it together, and you are experiencing a shared hormesis, right? Like you’re overcoming some difficult thing, but all together. There’s also, there’s a million other aspects that we can be like oh, wow, there was a really fuckin’ potent, insightful thing-
– [Zubin] Yeah.
– [Daniel] That they figured out that we have not replaced, right? We’ve lost a lot of those things, and we don’t have replacements, and whether it was that, or you know, we could go into all the types of rites of passage and tribal technologies, but they weren’t for no reason, and they weren’t just frivolous. They served fundamental roles in social coherence.
– [Zubin] It’s funny because, you know, when I was younger, I was much more of a scientific reductionist. I thought pretty much everything could be boiled down to processes that are explicable in the third person and that sort of domain, and rituals like that, I always thought, including religion, including cathedrals, including, “What a waste of money. “Why are we doing this? “These are dumb,” even marriages, like this whole ceremony, like, “Let’s just go have a civil ceremony. “This is just dumb.” Boy, was I completely wrong. The older I get, the more I realize these are crucial interwoven fabrics of human development and social development and all the things you just said. So, and again, I constantly talk about this meditation retreat, because it’s the most, it’s the easiest thing I can talk about, it’s the most recent. This idea that you get 30 people together, there was a small group, and a lot of them were my supporters on the show, but they’d been drawn to it, because they feel the call to do this thing, and this guy, Angelo DiLullo, who’s a physician who wrote this book actually, “Awake,” about enlightenment, awakening, and it’s really kind of a zen-influenced kind of thing, and long story short, we went there, didn’t know what to expect, silence, but in the evening we would do group activities, and those group activities were, everyone was reluctant. They’re like, “What, we’re gonna do improv comedy? “We’re gonna do what? “This kind of circling and storytelling and emotion work?” It was the most connected, open, amazing experience, and it pulls into the real world, kind of like I went to Whole Foods to order a slice of pizza, and I was so, I felt so connected to the guy making the pizza who was having, looked like he was having a bad day, that he could feel that energy. I mean, there was like this nonverbal communication that happens, and he just gave me the biggest slice.
He’s like, “Hey, thanks,” and I’m like, “Thank you!” I mean, this crazy, stupid, intangible stuff that sounds woo-woo, but it’s absolutely real. So that’s why I ask like how important is that sort of technology, you know?
– [Daniel] So it’s critical, and it’s not like you-
– [Zubin] ‘Cause we’ve lost it to some degree.
– [Daniel] Totally. It’s necessary, but it’s not sufficient if we were to still be making physical tech that is also conditioning human psyche and values and conferring power that was doing the opposite, but with exponential tech influence scale-
– [Zubin] Push and pull.
– [Daniel] So we have to be designing the tech differently on the physical side, designing the incentive systems and the deterrent systems differently, and working at the purely psychospiritual, cognitive culture level. And so, if you think about the way Marvin Harris talked about society, or any civilization being a function of three interacting things, you have an infrastructure, a social structure, and a superstructure. The infrastructure is the physical tech that mediates our physical needs and relationship with the physical world, so water, food, transportation, production, waste management, all that kind of stuff, and obviously, all of the technosphere. The social system is basically the collective agreement field, so government, law, economics. The superstructure is kind of like culture. It’s like what is the good life that we’re orienting towards? What is the existential reason for any of it? What is the ethical basis? And so, we can see that religion has become pretty hollowed out. We can see that patriotism, that most of the things that bound someone to more than themselves, and maybe just their immediate family, are pretty hollowed out. And they’re replaced with, you know, you talk about kind of the quasi-religions, where people have some kind of tribal identity that feels like it’s associated with somehow preventing the rapture, and making some kind of better heaven on earth, or whatever possible, but they’re shitty religions, right? The new ones are fairly poorly designed for the purposes.
– [Zubin] Right.
– [Daniel] But while Harris argued that infrastructure drives changes in social structure and superstructure, there are other social philosophers who will argue why each one has causal arrows pointing in the other directions, and it’s all true. So what we could say is how to make a fundamentally viable civilization requires a certain amount of changes in all three simultaneously that lead to a virtuous process. Now, for social systems, let’s just go ahead and take government as a basis, or you know, governing institutions, ’cause the market will create a bunch of incentives, but it’ll have incentives to do some things that suck for the whole, which is why we have the idea of rule of law, that we can collectively say no, we don’t wanna cut all the trees down. We want national forests, or we don’t want organ markets, or whatever it is. So we’ll make a law around that thing, and then we’ll give something like monopoly of violence to be able to enshrine rule of law and do enforcement. So that’s able to basically bind market and tech. It’s not anymore, right? It’s been hollowed out in a way where it really can’t, and that’s important. But if that social structure that is designed to guide the infrastructure and the power of it isn’t derived from the superstructure, right, which is that the consent of the governed is where the power of the government comes from, that it is a government of, for, and by a comprehensively educated citizenry, then it will be authoritarian. And so, either you don’t get institutions that can manage the complexity, and you get catastrophes, or you do get institutions that can manage the complexity, and they create order through imposition, and they’re oppressive.
So you either get catastrophes on one side, or dystopias on the other, or you have to actually have a culture that is developing the capacity to make sense of the world individually and together, to meaning-make the world individually and together, to be able to do choice-making individually and together, so that a comprehensively educated, informed, capable of post-cynical, good faith communication population engaging with each other can create oversight and watchdog institutions that have the capacity to engage and bind the technology, so that we are safe stewards for the power of exponential technology.
– [Zubin] All of those things have to kind of happen. That’s the thing, they have to happen interwoven, interpenetrating. It can’t just be one, or the other, and again, I feel like, you know, I’m gonna quote Iain McGilchrist, who’s written a book, “The Master and His Emissary,” about the difference between the right and the left brain, and how the left brain, the emissary that now thinks it’s the master, really loves to take parts and work on parts and make wholes out of the parts instead of looking at a whole, whereas the right brain is silent, doesn’t really use direct language, but it sees things relationally, and interwoven, and so on. And as civilizations start to progress, it seems they shift to a more bureaucratic, left-brain, kind of absolute certainty, reductionism, et cetera, and I worry that unless we can reintegrate with the right, you know, form a kind of corpus callosum that connects the two, we are gonna fail at this avoiding dystopia, or avoiding catastrophic risk. I mean, I’m curious your thoughts on that.
– [Daniel] If people haven’t seen ’em, everybody should watch the dialogues between David Bohm and Krishnamurti.
– [Zubin] Oh, wow.
– [Daniel] Some of my most favorite dialogues ever. David Bohm is talking about “Wholeness and the Implicate Order” and why it is a fragment, and you know, if people don’t know, he was one of the top physicists who advanced quantum mechanics following Einstein, but also a really deep philosophic thinker. And so, the dialogues between him and Krishnamurti were super interesting on these topics.
– [Zubin] I’ve heard about this. Supposed to be remarkable.
– [Daniel] Yeah. And Krishnamurti was, of course, always asking what is the fundamental basis of conflict, and what would it take to have a civilization that could actually deal with conflict-
– [Zubin] By the way, who was Krishnamurti? Maybe fill these guys in, so.
– [Daniel] I mean, there’s a whole interesting story of his relationship with the Theosophical Society, and like that, but just a very interesting 20th century thinker that came from what would’ve seemed like an esoteric spiritual tradition, but was a really, really keen intellect, and was really focused on what would a civilization that could actually deal with conflict well require in the development of people individually and at scale as our capacity for the consequentiality of conflict is growing, right? Post the bomb, it’s a different story, because conflict before had a limited scope of how many people would be harmed. And now, you have a situation where conflict can escalate to kill everybody.
– [Zubin] Yeah.
– [Daniel] And obviously, that was when you had one type of catastrophe weapon. Now, with bio-weapons, and AI weapons, and cyber weapons, and drone weapons, we have way more catastrophic weapons in a much more highly interconnected, fragile world with a fragile ecosystem close to planetary boundaries, with way more actors that have them, where you can’t do mutually assured destruction. So if you don’t figure out how we deal with conflict better, of course, one answer is, well, just ubiquitous surveillance and control everyone, which is the dystopia to avoid the catastrophe. The other answer has to be an authentically better self-organizing collective intelligence. And so, you know, Krishnamurti was obviously prescient seeing this, and being like, “We actually have to deal with the origins “of conflict better.” And so, you can get into Marx’s ideas of class conflict, and Girard’s ideas of mimetic desire and conflict, and you know, you kind of have to go deep on where those come from to say what would it take to deal with them well-
– [Zubin] Right.
– [Daniel] But Bohm was looking at the orientation of a mind that mostly thinks in words, of Western mind, you know, in particular, to break reality into parts, and make sure that our word, the symbol that would correspond with the ground there corresponded with the things that it was supposed to, and not the other things, so try to draw rigorous boundaries to, you know, divide everything up, led to us fundamentally relating to everything as parts first. And so, then you get stuff that doesn’t make sense. Like you bring parts together, and there’s some new property that none of the parts have, and you have to call it emergence, and it seems kind of wacky, but it was only because that property was there from the beginning and you took it into parts-
– [Zubin] Parts-
– [Daniel] And-
– [Zubin] Oh, fascinating! So you call it emergence because you started with the parts, and you built it up, whereas it was a whole to begin with, and ah, so what was missing, you’re calling emergence.
– [Daniel] Exactly.
– [Zubin] Ah, fascinating.
– [Daniel] And so, whether you have one country trying to benefit itself at the expense of another country, or all of the countries trying to grow their GDP at the expense of the ecosystem, or a proposition that is benefiting something, but harming something else, and polarizing the population, or a particular metric you’re trying to optimize for in healthcare that causes iatrogenic cascades that damage something else, or whatever, in the highly interconnected world, the solution, or what you’re optimizing for, is made a narrow subset of everything. The thing that you’re doing is still interacting with complexity. So whatever it’s affecting outside of what you’re intending to affect will be negative externalities. Some people will care about those. They’ll respond to those. You’ll keep driving conflict, driving externality. With exponential population and exponential tech, that thing self-terminates. So we can’t do exponential externality. So you have to do a better job of not doing externality, which means that the goal is not to optimize for GDP. It’s something broader than GDP. It’s not just GDP per capita. Okay, we need to add a Gini coefficient. Yeah, but that doesn’t include personal happiness, whatever, so we need to add a national happiness index. Yeah, but that doesn’t include environmental stuff, so we have to add a CO2 thing, but that doesn’t include nitrogen, so-
– [Zubin] Parts, parts, parts.
– [Daniel] You keep adding, and yet there will always be stuff in some finite set of metrics you’re optimizing for that matter that are outside of that set.
– [Zubin] Yeah.
– [Daniel] So how do you actually relate to the whole, and say the thing we’re trying to optimize for is not definable, right? That’s what I would say the first verse of the “Tao Te Ching” was about, right? The Tao that is speakable is not the eternal Tao, and that’s what I would say the no false idols thing was about, that as soon as you say, “The thing we wanna optimize for is X,” it’s not really it, right? That’s not. The thing you wanna optimize for is the nature of the sacred. You try and define it, you’re gonna fuck it up.
– [Zubin] Yeah.
– [Daniel] But you can sense it progressively better.
– [Zubin] Yes.
– [Daniel] And you can do a lot of defining around it. But if you think your definition is it, that’s a false idol-
– [Zubin] I see.
– [Daniel] And so, how do we, how do we not have metrics that create optimization systems that then bind us, right, where now the board wants to hold the CEO to quarterly profit metrics to be able to get the money back to the shareholders who invested, so we create a law for the fiduciary responsibility for profit maximization. And now, even if you see there’s an externality, you can’t do anything about it, because you’re already bound to the metrics you’re optimizing.
– [Zubin] Yeah.
– [Daniel] So whatever set of metrics that we’re optimizing for, we have to be able to pay attention to things that are outside of those metrics that matter, and be able to progressively internalize that, which means we have to create governance structures that can orient around the ability to do that.
– [Zubin] Okay.
– [Daniel] Which also means we have to create minds that are oriented to do that, and know that other minds will notice shit that you don’t. This is where authentic diversity matters. Other minds will notice shit that you don’t, and there’s no way to notice all the things they do, which is why you have to be in dialogue.
– [Zubin] Okay. Okay. So… My field, healthcare, everything you just said points to the root of our dysfunction in what I call Health 2.0, which is our mechanized, reductionist, metric-driven, assembly-line model, medicine as machine is our current model, where we have an electronic health record that’s really an apparatus to gather data and optimize processes based on metrics that we’ve determined by fiat, or by committee, based on incomplete data, and we reduce the human health experience to a series of parts that can be optimized. What’s your hemoglobin A1C? What’s your BMI? Do you smoke, or not? Here’s your depression screening index, and start to optimize metrics and measurements, so that what used to be a right brain, left brain collusion of seeing, intuiting a whole through personal, so I’m with you, Daniel. I get a sense of who you are. I can intuit things that I could not even write in a note, but it wouldn’t matter, because what I write in the note is really to jog my memory. It’s not for anybody else. It’s not for billing, or anything, in the old days, right? But now, I don’t get to do that. I don’t get to see you holistically. I don’t get to see you in your built environment. I don’t get to see you at home. I don’t get to see you as part of your whole. I see you as a part, and not only a part, but all the metrics and incentives that pay me and that govern me are that bureaucratic, left brain stuff that reduces everything to this series of boxes that I then have to click off, which means I spend more time looking at the computer, clicking off the boxes than I do actually in a we space with you having an intersubjective dialogue, where you’re not just an it, you’re an I, and you’re also a we. None of that is incentivized.
So what you’re pointing at as civilization-level threat is already in fruition in healthcare, and it is a self-terminating process in healthcare, because we can’t keep spending 30% of our GDP on something that gets us the worst outcomes in the developed world, right? So when I talk about the next transcendent healthcare 3.0, it is that integral, post-cynical, holistic, dialogued, authentic diversity, everything that you just said is applicable in that space. That’s why when you talk, I just sit here going, “How do we actually make this happen in healthcare?”
– [Daniel] Okay, so if you wanna think about what is the future of healthcare, you can’t think about it separately from the future of civilization, given that all the other aspects of civilization are affecting human health.
– [Zubin] I think that’s a theme of the entire thing, yeah.
– [Daniel] And so, can you do a good job of healthcare without addressing agricultural practices that demineralize soil?
– [Zubin] Mmm, no.
– [Daniel] No.
– [Zubin] How can you?
– [Daniel] And so, if you have ubiquitously demineralized soil, and you use NPK to get something that looks like a vegetable, but doesn’t have the same trace mineral density, and those trace minerals are important for the gazillion things, are you gonna have a bunch of subclinical deficiencies that cause, you know, problems? And similarly, now you’re gonna cover it in glyphosate, and so, now you’ll have deficiency and toxicity simultaneously, right? And can you have something like Hostess and Coca-Cola’s multi-billion dollar, multinational advertising conglomerate, you know, capacities that are optimized for, you know, addiction optimization? Like can you have that thing, and have a maximally healthy population at the same time?
– [Zubin] Which also then generates an epiphenomenon of a big pharmaceutical company that is designed to then patch up the holes created by the addictive economy of our food and ecological mess.
– [Daniel] Yeah, ’cause I can just take the statin, and I can just take the insulin, and not have to actually deal with the dietary thing. It’s similarly a short-term, parts-based, you know, et cetera.
– [Zubin] Yup.
– [Daniel] And you know, I think, the thing I think medicine has done a particularly, like modern medicine has done a particularly exceptional job at is things that have single-point cause, and where you can identify-
– [Zubin] Absolutely, when there’s one gene, or one protein, or one problem.
– [Daniel] So acute medicine is always like that, right? An acute poisoning, an acute infection, an acute injury is fuckin’ awesome.
– [Zubin] Yeah, you’re stabbed in the neck? I can fix that.
– [Daniel] And, but the things that don’t have a single cause, that have multiple causes particularly, so complex causation, causal cascades that are delayed in time, and where they’re not the same in everybody, right? It’s not even, it’s these three things that cause it, but it’s some subset of these thousands of things that end up causing. So the way I think about complex chronic disease, whether we’re talking about neurodegenerative, or autoimmune, I don’t think there is such a thing as MS, or rheumatoid arthritis as a single thing, meaning that the same causal cascade happened for everybody, right, which is why sometimes that gene is there, and sometimes it isn’t, and sometimes. I think of that more as a set of biomarkers and a set of symptoms that cluster together usefully for a diagnostic code for insurance purposes, for regulatory purposes, for pharmaceutical purposes, but where the pathophysiology leading there was different, and which is why we don’t do a good job at curing them is ’cause we don’t do a good job at assessing the causation ’cause it’s not single cause, right? It’s a complex thing to figure out, which is why I think the best stuff that’s happening in functional medicine is doing a better job. When you look at like the work Bredesen is doing with Alzheimer’s and dementia, and I think Perlmutter and others have kind of weighed in on some of that work, it’s a personalized medicine approach to how to deal with it, ’cause it’s not the exact same pathophysiology in each case, but getting really interesting results. 
So for instance, and I’m just playing this out hypothetically, somebody has MS, and you do a really detailed history intake, and you know, they had a head injury that maybe created subclinical TBI, meaning it didn’t show up with morphological damage, but it would show up on a SPECT scan, or on a functional kind of EEG, and then they had mold toxins in their house, and then they had a really stressful divorce, and they’ve had chronic sugar kind of issues, and you get each one of those causing dysregulatory dynamics that eventually can cluster that way, but in a slightly different thing. Might be rheumatoid, autoimmune, or something else. But for someone else, it was like Lyme disease, and a certain amount of petrochemical toxin exposure, and some other set of things, and with more genetic precursor. So how to even think about the physiologic causal aspects of it has to involve a more complex epistemology.
– [Zubin] Yeah. And we don’t train medical students to do that. We train them in the reductionist model as much as possible, turn things into parts. So that has to change. And that’s part, it’s gotta co-evolve with civilization change. It’s gotta co-evolve with societal and environmental change. It’s gotta co-evolve with economic change, and social change, and all the other things. And yet, what I find often is society in its ills, the failures of civilization actually often end up on the doorstep of the healthcare system, and then what it does is it ripples through the actual healthcare worker population. So you know, we use terms like burnout. It’s really a kind of chronic moral injury, ’cause these are good people who went in for good reasons, and they find themselves unable to help people, because they’re in a model that doesn’t allow it, and the holistic way of looking at it is just, it’s not rewarded. It’s not paid for. You’re paid to turn things into parts, to bill by code.
– [Daniel] Well, and I mean, you know, you’ve mentioned that the relationship to patent law, and regulatory process, and fiscal incentive all co-influence. In a way it’s like it’s not some bad guy. It’s just the nature of the system does that thing.
– [Zubin] Yeah.
– [Daniel] If you have to go through Phase III clinical trials to be able to get the thing approved, and that costs like a billion dollars to do, especially when you amortize that some of them won’t make it, so for a drug company to get one that works, you have some number that don’t work, you have to be a big fuckin’ company to be able to do that thing. And insurance doesn’t support general health increase. It has to be disease, so you diagnose diseases in particular ways for kind of insurance purposes. And so, that whole cluster of things means, and you’re not gonna be able to patent endogenous molecules, and increasing the things that were there in youth anywhere near as easily, so it’s synthetic molecules. What’s the chance that a deficiency in the synthetic molecule was the causal reason that you got ill, that was not a part of your evolutionary environment, or healthy people?
– [Zubin] You don’t have a sertraline deficiency? A Zoloft deficiency?
– [Daniel] It doesn’t even make sense, right? Like you would think okay, well, when the same person with the same genome was healthy, what was happening differently was a difference in endogenous molecules, right? So can we study the molecules of health, and how they proliferate? But that does require changes in the regulatory-
– [Zubin] Yeah.
– [Daniel] Epistemic, fiscal, et cetera process. And obviously, if the people are all getting fed sugar, and stress, and isolation to begin with, they’re fucked in the first place, right?
– [Zubin] Yeah.
– [Daniel] And so, when I think about the future of medicine, I think about a system that is mostly preventing illness, but it’s not preventing illness. It’s promoting health across all of the dimensions. And of course, you can’t define health. You can kind of roughly define it, but it’s that same thing, increased adaptive capacity to an increased number of stressors of any kind. You can kind of roughly define it, right? But if you are promoting that, then there’s very little hospital, right? It’s mostly decentralized, in the same way we want education to be mostly decentralized. You want a civilization that’s developing humans writ large, and not just at the school. You want a civilization that’s promoting the health of people writ large, not just at the hospital. And then, when people do come to the hospital, I would like it to be much more like a temple.
– [Zubin] Mmm, I agree. I’ve said this, the future of medicine is more shamanic than, you know, than procedural.
– [Daniel] I mean, people are gonna die there, and people are gonna be born there, and people are gonna have near-death experience, and come back. If anything should be a temple, it should be a fuckin’ hospital.
– [Zubin] Can I tell you, as somebody who’s spent most of my life it feels like in a hospital, I mean, I’m a hospitalist, that’s my specialty, the place is anything but a temple in many ways, and it’s only made a temple by people’s conscious efforts, where you take a second to realize you’re in a sacred space with a patient who’s dying, or with a family who’s having the worst, you know, week of their lives, or something like that. Then any space becomes sacred, but it requires the intentionality of the person there, and if that person is so beaten down, is so obsessed with the parts just to survive, then that gets taken away, and they feel that, and they manifest it biopsychosocially in their own disease.
– [Daniel] Oh, this is why I was mentioning the complexity of even looking at the physiology, right? So of course, looking at the physiology complexly, like have I, am I looking at the subclinical deficiencies, and subclinical toxicities, and subclinical pathogenicity, am I looking at the physiological structure, and their dentistry, and you know, subclinical brain patterns in their SPECT scan, and whatever, all the things, some things that were uncurable become curable when you really start to, or treatable, when you really start to go in deeper. But obviously I also have to be looking at their relationships, have to be looking at their psyche, I have to look at the environment, because does their house have mold in it? I’m not gonna necessarily get them better if I don’t deal with certain of those things. How long does it take to do that intake well? Like the doctors who I know who do a good job of this-
– [Zubin] Yeah, it’s hours.
– [Daniel] You have a five-hour intake.
– [Zubin] Yeah.
– [Daniel] And then to be able to track what is working, and what isn't, when you can't do perfect variable isolation, and you really have to sense it. So how do the economics work for a system that requires doctors that are way the fuck more trained-
– [Zubin] Yeah.
– [Daniel] Than we currently have, and yet, with that much time investment? Well, obviously, you mostly just want to use the doctors a lot less, ’cause you have a system that is promoting health, and then if they’re also more trained, kind of to be able to do the psychospiritual interface that is happening there, it’s just a totally different vision.
– [Zubin] It’s a totally different thing, and the friends that I have who do this sort of medicine, they’re more in the concierge space. So you know, people pay a lot of money for unlimited access and they’ll do those five-hour things. They’ll look at every aspect. No stone will be left unturned. They’ll come to the house, that kind of thing, and it’s unfortunately the realm of the affluent.
– [Daniel] Totally.
– [Zubin] So, you know, I think it’s doable. In fact, I’ve talked about this idea that you can actually do it, and it requires a team. It’s not just a doctor, that’s the other thing. You need an interdisciplinary group, where everybody’s practicing the best that they’re trained to do, and they’re feeding back to each other in a huddle, and we did that at our clinic in Las Vegas. I mean, you had health coaches, who are drawn from that community, who know that population, who speak that language, who know the hopes, dreams, and fears, and could get in people’s heads, and then you have doctors, nurses, nurse practitioners, a licensed clinical social worker who does the mental health and care coordination and all those other things, and we would call them nurse innovators. Why do we reduce nurses to pill pushers and data entry clerks when they are so much more than that? They were self-selected for being nurturing types. Like, what are we doing? So all of that is solvable, but you need all the things you said: the changes in incentive structures, in social structures, in governance, in regulatory stuff, all that.
– [Daniel] Yeah, so I want the number of nurses per capita to go way up.
– [Zubin] That would be nice.
– [Daniel] I want the number of doctors per capita to go up. I want the number of teachers per capita to go way up, so that the students have much more personalized attention, and I want the amount we can train them to go up, so you can really fuckin’ train those people, and really maximally talented ones, ’cause how are you gonna get a maximally talented future generation if the teachers are not the maximally talented people?
– [Zubin] Right.
– [Daniel] And so, how do you make an economic system that does that?
– [Zubin] Yeah, how do you do that?
– [Daniel] Well, in the face of the increasing technological automation of so many jobs, and yet the increasing wealth that is produced by those things, can you start to create new types of economic systems, and economic rewards to have a higher percentage of people doing these types of very, very high-touch, high-complexity things, and getting more rewarded to do so?
– [Zubin] Yeah, intuitive, creative, holistic things that require a human intelligence. That’s enabled by tech, right? So the tech actually can help offload. You know, I don’t know if it was Kasparov who said it, but he said that Deep Blue, or whatever, could beat any human alive, but you give me Deep Blue as a tool, and I’ll beat any human, or any computer, alive.
– [Daniel] Right.
– [Zubin] Yeah, and I think that’s true in medicine, too. We talk about AI replacing doctors, I don’t think that’s-
– [Daniel] Symbiotic intelligence between collective intelligences of humans designed well with the augmented computational capacity.
– [Zubin] Yeah, so the mechanical intelligence that computers are very good at-
– [Daniel] This is the same as what the future of democratic-type systems could be: you make something better than the Fourth Estate, something that is Facebook-like, meaning it can curate all the information, it can do machine-learning parsing on it and customize newsfeeds, but where it is optimizing for things like the relevant, comprehensive educatedness of a population.
– [Zubin] Social cohesion, whatever.
– [Daniel] And, you know, the same types of generative AI, semantic generative AI like GPT-3, that can make an infinite amount of super compelling deepfakes and fuck the epistemic commons more, could also be developed to parse massive fields of text to see what is the distribution of the values that everyone has about a particular thing, to take those as design constraints, or to try to come up with a better proposition that would be less polarizing and more unifying for the whole population.
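The value-distribution idea Daniel describes could be sketched, very loosely, like this. Everything here is a hypothetical illustration, not anything from a real system: the value lexicon, the naive keyword matching (a real version would use a language model, not word sets), and the 50% threshold are all made up.

```python
from collections import Counter

# Hypothetical value lexicon: maps a value to words that signal it.
VALUE_KEYWORDS = {
    "safety": {"safe", "risk", "protect"},
    "liberty": {"freedom", "choice", "rights"},
    "fairness": {"fair", "equal", "justice"},
}

def count_mentions(texts):
    """Count how many texts voice each value at least once."""
    counts = Counter()
    for text in texts:
        words = set(text.lower().split())
        for value, keywords in VALUE_KEYWORDS.items():
            if words & keywords:
                counts[value] += 1
    return counts

def shared_values(texts, threshold=0.5):
    """Values voiced in at least `threshold` of the texts:
    candidate design constraints for a less polarizing proposal."""
    counts = count_mentions(texts)
    return [v for v in VALUE_KEYWORDS if counts[v] / len(texts) >= threshold]
```

The point of the sketch is the shape of the move: rather than amplifying whichever framing is most divisive, you surface the values most of the population already shares and use those as the constraints a proposal has to satisfy.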
– [Zubin] The opposite of what algorithms do now.
– [Daniel] And it’s not that it’s replacing human choice. It’s informing-
– [Zubin] Informing human choice, exactly-
– [Daniel] One way that you get over the data singularity, the fact that there’s more data than anyone can pay attention to, is that people don’t actually need all the data to make a choice. They need the meaning, which is like a second or third derivative on the data. Can I have the AI process a fuckton of data to parse where there’s incorrect stuff and where there’s high-confidence stuff, and put it into a form that is decision-informing? Then what is relevant for decision-informing is stuff that people can actually keep up with.
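The "meaning as a derivative of the data" move can be sketched minimally: compress a raw stream of readings into a baseline plus only the deviations big enough to inform a decision. The z-score cutoff and the shape of the summary here are arbitrary illustrations, not anything from an actual clinical or newsfeed system.

```python
from statistics import mean, stdev

def summarize(readings, z_cut=2.0):
    """Compress raw readings into a decision-informing summary:
    a baseline, plus only the points that deviate enough to matter."""
    mu = mean(readings)
    sigma = stdev(readings)
    anomalies = [
        (i, x) for i, x in enumerate(readings)
        if sigma and abs(x - mu) / sigma > z_cut
    ]
    return {"baseline": mu, "n": len(readings), "anomalies": anomalies}
```

A clinician (or citizen) never sees the fuckton of raw points, only the baseline and the handful of flagged deviations, which is exactly the volume a human can actually keep up with.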
– [Zubin] And that is actually the application of big data in medicine, too. We don’t need more data; there’s already too much. You could have wearables. You could have all this other garbage. Doctors are terrified of that, because what the hell are we gonna do with that data? What we need is to parse it down into bits where we can find meaning, and use our intuition, which is why we do 10,000 hours of training, right, because that becomes an unconscious intuitive thing. And only humans right now really have the inductive ability to do that, and I actually had Federico Faggin on the show. He’s the guy who co-developed the world’s first commercial microprocessor at Intel, and he wrote a book arguing, look, machines will never have consciousness, internal state awareness, because of the following types of limitations in their intelligence, and he makes these very, you know, absolute statements, which you could argue with, but it was really this idea that humans do a type of intelligence that’s almost rooted in the infinite. It’s almost rooted in the Tao, in the sublime, in emptiness, whatever spiritual term you wanna call it. It’s bringing forth this thing that the computers can’t do, but that doesn’t mean we can’t use them as a tool. But we’ve devalued that aspect of humanity now, especially in medicine.
– [Daniel] When you look at what AlphaGo and systems like it did to people at chess, and then at Go, and then at video games, and you continue to forecast where that goes, can AI, in any way that you model how it’s gonna unfold, beat humans at all definable games? Totally. And that means military games, that means economic games. Can someone who can’t do AI high-speed trading compete with someone who can? No, right? And this is where the finitely definable game theory is not the only thing worth doing, right? At that thing, of course, we do lose to AIs, and so then it’s a race of who can deploy the AIs better, or faster, but that is a race whose endpoints are not nice.
– [Zubin] Mmm-hmm.
– [Daniel] And so, this is a fundamental question. We’re talking about the redesign of medicine; really, every sector has to be redesigned. We’re talking about the redesign of the Fourth Estate, and of democratic process, the redesign of education. What is the purpose of education in a post-technological-automation, post-ubiquitous-AI world, where most of the things that we were teaching kids to do have all been automated?
– [Zubin] Like math, et cetera-
– [Daniel] Well, obviously, to be able to answer what is the future of education, you have to answer what is the role of the human in that world, ’cause education is to develop humans that have the capacities to do that thing. And I think it happens to be that the things humans do uniquely well, relative to at least the narrow AIs that we currently have, are also things that humans have more intrinsic incentive to do. The things that are easier to automate are usually more rote, things you have less intrinsic incentive for, so you have to extrinsically incent people, and you get market dynamics of all these types. As you automate a lot of those things while still creating wealth, are the remaining things ones that have more intrinsic incentive? Plus, you have more total resource to allocate. There are some interesting things in there that are possible.
– [Zubin] Interesting and optimistic. I think, you know, there’s a “future is very bright” kind of vision there, where humans are freed to actually be the kind of selves they want to be, their own authentic selves. You know, I gotta say, man, we’ve had a 2 1/2-hour conversation, and I’m looking at my card here, and I’m worried it’s gonna give up the ghost as we reach the end of the capacity of that second card. Man, I could talk to you for like 17 hours on any one of these things, and I still had other stuff that we probably won’t be able to reach this time, but-
– [Daniel] Let’s do another time.
– [Zubin] Would you do that?
– [Daniel] Yeah, totally.
– [Zubin] ‘Cause you were kind enough to swing outta your way today, and-
– [Daniel] This was a blast. I love the type of public education you’re doing. I think it’s so, so important, and so I was really happy to come. I super enjoyed this conversation, and yeah, there’s a bunch of other places we could go. I’m curious to hear what people find interesting, what they have questions on, or disagree with from this piece. And yeah, I love the fact that you’re publicly educating and learning publicly simultaneously, right, a medical expert who is actually learning, updating their position, and trying to orient people towards synthesis of something that is otherwise culture-breaking. And I really wanna see you do that thing where you get the experts in, because it needs to happen.
– [Zubin] You’re very kind, we’re gonna try to do that. I have to say this, Daniel: you are one of the people whose resources I can consistently look to online to learn from, and you’ve been instrumental in my own understanding of these sorts of civilization-level threats, and how to apply them even down to my own field of medicine, and beyond. So I’m looking forward to continuing the dialogue. I’d love to get, you know, Tristan and Zak-
– [Daniel] Yeah.
– [Zubin] On the show, and talk to them about these things, too, because again-
– [Daniel] I’ll send you intros.
– [Zubin] Oh, that’d be awesome, and where can we find you online?
– [Daniel] There’s a project that we started a little while ago. It’s still in beta phase, and we have some super interesting things we’ll be doing with it in this next year. It’s called The Consilience Project, consilienceproject.org, and I have a personal blog that mostly just has a list of podcasts up at civilizationemerging.com.
– [Zubin] Okay, great, I’ll put both those links in, and yeah, those interviews are fantastic, man. I think you did like a four-hour thing with Lex Fridman. Man, I tell you, next time, we’re gonna do six hours just to beat Lex-
– [Daniel] He spent the first half hour on UFOs, which I didn’t expect we were gonna go there, but yeah, Lex is cool.
– [Zubin] I heard that, I remember thinking, “You actually know more about UFOs than you should.” Like where did you even learn about that?
– [Daniel] Well, I mean, just think about what a consequential topic it is depending upon what is true.
– [Zubin] The most.
– [Daniel] So I think anything that is hard epistemically to figure out if it’s true or not, but super consequential, and differentially consequential one way, or the other is just innately really interesting.
– [Zubin] Yeah, there you go. Well, see, that’s what I love about you: your curiosity drives you to go from first principles to establish really interesting thought patterns on stuff, and you’re really a unique guy. Thank you. Guys, you know what to do. Share the show. If you really like what we do, I was talking about Locals earlier, I’m gonna pitch it one more time: zdoggmd.locals.com. You can join us there for free, or become a supporter for like five bucks a month, and then we have this little internal tribal dialogue that’s looking for consilience, synthesis, alt-middle, integral, all the things, post-cynical thought, so thank you.
– [Daniel] I haven’t been there yet. I’ve heard about it, I’ll come find you on Locals.
– [Zubin] Find me and-
– [Daniel] Be a supporter.
– [Zubin] And then poke holes in all the problems with Locals, ’cause I know you’ll find ’em.
– [Daniel] Cool, thank you, Zubin-
– [Zubin] All right, brother, take care.