Motivation is about how you currently feel towards doing some action. But why do you feel it towards some things, and not others? In this section we meander through six different categories for the undercurrents of motivation. The first is political, and carries no weight for me; it is little more than me venting about my hatred for inside-view politics.
The first is Individualistic and Political. This type has a desire for control, rank and power.
“Great cities attract ambitious people. You can sense it when you walk around one. In a hundred subtle ways, the city sends you a message: you could do more; you should try harder. The surprising thing is how different these messages can be. New York’s dominant message is to make more money. Boston’s is to be smarter. And as much as they respect brains in Silicon Valley, the message the Valley sends is: you should be more powerful. What matters in Silicon Valley is how much effect you have on the world.” (Paul Graham)
It seems coherent, if you don’t step outside the echo chamber, that The Bay attracts the power hungry. Companies are cast as transcending governments, the preeminent vehicles for change and for influence over the capital structures that matter. “If companies shut down, the stock market would collapse. If the government shuts down, nothing happens and we all move on, because it just doesn’t matter.” (PG)
These political pieces within people base their self-worth on the quality of their verbal opinions, endlessly talking about the impact of ‘resource alignment’ and other buzzwords that describe the vast empires beneath both them and those they worship.
They focus on useful self-definitions and social skills that build illusory tribal currents with those higher on the greasy corporate pole. Implying false intimacy, remembering first names, and cultivating subtle skills of effective deference.
When this is your focus, your umwelt shrinks around the baseless social reality that is created around you. It consistently threatens your ability to be honest with yourself. To see the world clearly. To have some basic integrity as a person. But some people, those with good intentions and more, should be encouraged to endure this political theater of the absurd, because there is no other life so filled with consequence.
Taking a step back: we all have this political motivator to some degree, and in some ways it is useful. Control can help create a secure emotional base around which to explore the world. A sufficient amount of social status can help secure pieces of identity, so that other pieces can be focused on. And power helps create both situational influence and the construction of intentional direction.
But if you incessantly hunger for more power, you will be left feeling weak and afraid. Worship it, and your threatened ego will need ever more power over others to keep the fear at bay.
This second motivator has two sides. The first is internal-facing: feeding on the primal need for social belonging within our tribe. The second is external-facing: the desire to help others and solve social problems.
But The Bay crushes cultures that actually try to solve social problems (beyond the benefits of comforting employee morality, and public relations). This is because there is a competitive advantage to undercutting values, which makes authentically altruistic corporate aspirations not ‘pragmatic’. Solving social problems is therefore a burden when one is confined to the incentives of an isolated business. This constrained moral circle on the super-organic level prevents collaboration, and promotes the tragedy of the commons.
So, as the tech elite, we should ignore this ‘save the world’ motivator, in favor of comforting our egos with a thick lather of social approval within ‘high tribes’. And who knows, maybe our aspirational technologies, with their wizard-like generalizability, will someday satiate all of our addictions. After first, of course, creating them.
Taking a step back: Our desire to fixate on tribal-style social survival is an unfortunate trait that lingers from our evolutionary past, and it is a trait that is no longer needed. But it still exists, and the insecurity of belonging creates a fear that controls most people’s lives. For example, labels of social approval often incentivize people to do meaningless jobs and live unfulfilling lives that they wouldn’t otherwise consider taking part in.
This lack of social security creates a fear that favors conformity, in hopes of exploiting others for praise and approval. Sometimes we may focus on winning the approbation of a wider society, but often we only focus on our ‘Puppet Masters’. That is, a person or group of people whose opinion matters so much to you that they’re essentially running your life.
This ‘social motivator’ is a detriment to the freedom of those without sufficient social wealth.
And when it comes to social wealth, the poor get stuck in a cyclical scarcity trap, while the rich just get richer. Those who are sufficiently rich are said to be ‘secure in their social standing’. You can often pick these people out by their thoughtful, yet fractious, beliefs; wakes of constructive confrontation springing from their independence of mind.
But more importantly, these people are better able to find their Authentic Voice, an introspective decision-making process that is less muddled by implicit peer pressures, and is formed by experience and reflection. This contrasts with a decision-making process that relies on the strongly held opinions of the outside world. An Authentic Voice instead uses the outside world only to learn and gather information, as an aid to eventual decisions. This approach leads to an identity that isn’t built on approbation, and therefore prevents inevitable criticism and rejection from being soul-crushing.
This motivator is about the desire to learn for the sake of knowledge. Some believe that knowledge must have a purpose other than itself, or it collapses into infinite recursion. And so we should ground ourselves in the roots of empiricism, to yield the fruits of prediction. That is, theoretical knowledge should be grounded in science.
Why should we be motivated towards Science? I believe the only answer is genuine curiosity: a burning itch to fill the deprivation we experience when we identify and focus on a gap in our knowledge. That is, curiosity seeks to annihilate itself. This process is unruly, subjecting everything to the possible laceration of a smart question nobody has yet thought to ask. It prefers diversions, unplanned excursions, impulsive left turns. In short, curiosity is deviant, which makes intellectuals defiant.
But the theoretically defiant are not powerful, because knowledge is not power. It wants power. It’s used most by the groups that are (or feel) weak. In its extreme, ethics is no more than an attempt by the weak to gain power over the strong. This is why the well-being of the many has always been the alibi of tyrants.
Reify it with technology. Use knowledge as fuel. Then you get that new kind of power. The kind that will soon trump financial influence. This appeals to those who are at least curious about a future of enduring progress towards some higher end.
It is true that almost everyone is interested in returns on investment, but almost all frames are severely limited to the near future.
Utility = (Rate × Amount) / Delay

where most of society is blind to its inability for delayed gratification (any utility with a large Delay), and where all market responses are unconfident in the length of their lifetime (hungry to exploit the now).
I propose that a Utilitarian motive becomes unlocked when aimed at far-future goals whose payoff grows far more rapidly than the delay of their realization. I use the term ‘quixotic utility’ for the kind of goal where the Delay is massive, but is still outweighed by the expected Rate × Amount.
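To make the trade-off concrete, here is a toy sketch of the formula above. The numbers are invented for illustration only; nothing here is a calibrated model, just the arithmetic of a payoff whose numerator outruns its delay:

```python
def utility(rate: float, amount: float, delay: float) -> float:
    """Toy version of Utility = (Rate x Amount) / Delay."""
    return rate * amount / delay

# A near-term goal: modest payoff, small delay (invented values).
near = utility(rate=1.0, amount=10.0, delay=1.0)

# A 'quixotic' goal: enormous Rate x Amount, massive Delay (invented values).
quixotic = utility(rate=100.0, amount=10_000.0, delay=40.0)

# The huge Delay in the denominator is still outweighed
# by the explosive Rate x Amount in the numerator.
assert quixotic > near
```

The point of the sketch is only that dividing by a large Delay does not doom a goal, so long as the expected Rate × Amount grows faster still.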
But it’s called quixotic for a reason. These utility explosions at the limit are too distant to be assured. And given how easy it is to generate new mechanisms, accounts, theories, and abstractions, we need to vet them to find those that are useful; those that have worked thus far. If something is too far away from what we currently understand, then there is nothing that we know a lot about near to it. Staying close enough to what we already know, for sufficient testing, is necessary for Vetting Value (Rate × Amount).
To be sure, a healthy conversation around quixotic utility should be supported (ex: AI safety). It should be supported to the extent of sufficient saturation. One should then propagate to what is currently possible (to stay relevant, this is defined as cutting-edge Machine Learning). One can then propagate again, through the constraints of the previous paragraph to view safety concerns that are closer to the current reality (ex: power dynamics resulting from information asymmetries with real-time global satellite imagery). This means that ML that is close to implementation (ex: satellite imagery) should be tied to a relevant ethics committee.
So as an ML person, how far into the future should you optimize for a Utility Boom? Is this question even useful in practice? In physics, research takes on average forty years to be realized within engineering. ML is probably too young and diverse to have an equivalent forecast. It may be fair to say that there are different ways of creating context around R&D hypotheses, such as Sequence vs Cluster Thinking.
But it may be useful to have an explicit tightrope along which the Übermensch must walk towards strong AI. At the least, it makes for more concrete ground from which to evaluate the stability and value of our trajectory.
Reach too far, and you fall into over-saturated quixotic utility, with high expectations that a black swan will make your research irrelevant before any of its value is realized.
Don’t reach far enough, and you’ll be stuck cleaning data and optimizing ads, while we slowly fall prey to entropic gravity and the increasing instability of Murphy’s law.
Reach just far enough, and our last Goldilocks, through The Blessing of Abstraction, will absolve us with favorable autopoietic lift.
 “[In hunter gatherer times], being part of a tribe was critical to survival. A tribe meant food and protection in a time when neither was easy to come by. So for your Great2,000 Grandfather, almost nothing in the world was more important than being accepted by his fellow tribe members, especially those in positions of authority. Fitting in with those around him and pleasing those above him meant he could stay in the tribe, and about the worst nightmare he could imagine would be people in the tribe starting to whisper about how annoying or unproductive or weird he was—because if enough people disapproved of him, his ranking within the tribe would drop, and if it got really bad, he’d be kicked out altogether and left for dead. He also knew that if he ever embarrassed himself by pursuing a girl in the tribe and being rejected, she’d tell the other girls about it—not only would he have blown his chance with that girl, but he might never have a mate at all now because every girl that would ever be in his life knew about his lame, failed attempt. Being socially accepted was everything.
Because of this, humans evolved an over-the-top obsession with what others thought of them—a craving for social approval and admiration, and a paralyzing fear of being disliked.” (WaitButWhy)
I think utilitarian goals with small Delays focus on Resource Alignment, whereas utilitarian goals with large Delays focus on Inferential Alignment. Inferential alignment aims to construct dense spaces of intelligent associative models. Its followers may work harder in the space because the idealistic pedestal alleviates certain kinds of paralyses. I’m honestly curious as to why people wouldn’t dive deep into the hype at the top of the pyramid.