Archives of the Exolymph email newsletter.

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Something Something Blockchain

Yay, We Don’t Need Politics Anymore!

The DAO's logo, grabbed from their website.

I wanted to resist writing about The DAO — that stands for “decentralized autonomous organization” — but after going through my notes from this past week’s reading, I realized that I can’t avoid it.

The reason I wanted to steer clear is that everyone else has already said it better, but maybe you don’t subscribe to their newsletters. Besides, who else will address the cyberpunk angle?

Bloomberg columnist Matt Levine covered The DAO with delightful snark:

“One of the great joys of our modern age, with its rapid advances in financial technology, is examining the latest innovation to try to figure out what centuries-old idea has been dressed up in cryptographical mystification.”

To summarize aggressively, The DAO wants to crowdsource an entire company, which will sort of act as a venture capital partnership, dispensing ETH (ether, the cryptocurrency native to the Ethereum network). You can read plenty more about their structure and setup on their website. The DAO’s main differentiators are “smart contracts” and, as the name suggests, decentralized governance:

“The ETH held by The DAO will never be centrally managed. DAO Token Holders are able to vote on important decisions relating to the management of The DAO, including the power to redistribute its ETH amongst themselves.”

Cryptocurrency Art Gallery by Namecoin.

The cryptocurrency crowd fascinates me because many of them seem to think they can opt out of normal human power structures, or somehow use code to avoid disputes. And I think that’s… well, impossible. (Maybe I am strawmanning egregiously, in which case I hope a cryptocurrency enthusiast or garden-variety libertarian will email me to point it out.) As I’ve written before:

“There is a reason why centralization happens over and over again in human history. We didn’t invent the Code of Hammurabi out of the blue. Monarchy did not develop randomly, and republics require executive branches.”

Direct democracy is a terrible system; I would go so far as to say it’s unworkable. Does anyone endorse mob rule? And centralized power is an oft-repeated pattern because it’s efficient. Furthermore, we have courts and the like because they’re useful — because the need for arbitration arises frequently despite the existence of contracts. Going back to Matt Levine’s article:

“The reason that ‘law and jurisdiction’ come into play is that sometimes stuff happens that is not addressed with perfect clarity in the contract. Sometimes the parties need to renegotiate to address something not specifically anticipated in the contract. Sometimes they can’t agree, and need an outside adjudicator to decide what should happen. And sometimes the project affects people who never signed the contract in the first place, but who have a claim nevertheless.”

And as business analyst Ben Thompson wrote in his “Bitcoin and Diversity” essay:

“I can certainly see the allure of a system that seeks to take all decision-making authority out of the hands of individuals: it’s math! […] If humans made the rules, then appealing to the rules can never be non-political. Indeed, it’s arguably worse, because an appeal to ‘rules’ forecloses debate on the real world effects of said rules.”

Lots of people don’t want to do the hard things. They don’t want to admit that decisions always carry tradeoffs, and they don’t want to negotiate messy human disagreements. But a world without those hard things is fairyland — nothing more than a nice dream.

As we continue to integrate computing into our daily lives, our legal system, and our financial system, we will have to keep grappling with human fallibility — especially when we delude ourselves into thinking we can escape it.


Update circa June 19: I was tempted to write about The DAO again, since it’s been “hacked” (sort of) and a “thief” (sort of) absconded with $50 million (USD value). However, a lot of other people have already published variations of what I wanted to say. The drama is still unfolding — /r/ethereum is a decent place to keep track — so I can’t point you to a canonical writeup, but Matt Levine’s new analysis is both cogent and funny. Also this Hacker News comment is smart.

The Mad Monk in a New Century

Cyber portrait of Rasputin. Artwork by ReclusiveChicken.

Some men gain their reputation and influence through sheer charisma, perhaps with a dash of self-engineered notoriety:

“I realized, of course, that a lot of the talk about him was petty, foolish invention, but nonetheless I felt there was something real behind all these tales, that they sprang from some weird, genuine, living source. […] After all, what didn’t they say about Rasputin? He was a hypnotist and a mesmerist, at once a flagellant and a lustful satyr, both a saint and a man possessed by demons. […] With the help of prayer and hypnotic suggestion he was, apparently, directing our military strategy.” — Teffi (Nadezhda Lokhvitskaya)

Now imagine if Rasputin had deep learning at his disposal — a supercomputer laden with neural nets and various arcane algorithms. What would Rasputin do with Big Data™? Perhaps the Rasputin raised on video games and fast food would be entirely different from the Rasputin who rose up from the Siberian peasantry.

Which rulers would a modern Rasputin seek to enchant? Russia has fallen from its once formidable greatness, and I don’t think Vladimir Putin is as gullible as the Tsar was. China is the obvious choice, but Xi Jinping similarly seems too savvy. Somehow I doubt that Rasputin, the charlatan Mad Monk, could gain much traction in a first-class military power these days. Would he be drawn to the turmoil of postcolonial Africa?

Maybe Rasputin would be a pseudonymous hacker, frequenting cryptocurrency collectives and illicit forums. Would that kind of power suffice? Would he be willing to undo corporate and governmental infrastructure without receiving credit? Would he have the talent for it, anyway? Not everyone can become a programmer. Maybe he’d flourish on Wall Street instead.

What I’m really wondering is whether Rasputin’s grand influence was a result of being in the right place at the right time. Would he have been important no matter when he was born? You can ask this question about any historical figure, of course, but I want to ask it about Rasputin because he’s cloaked in mysticism. I can imagine him drawing a literal dark cloak around himself, shielding his body from suspicions that he was just a regular human.

You’ve probably heard the rumors about how hard Rasputin was to kill. Who is the Mad Monk’s modern counterpart? Which person who wields the proverbial power behind the throne will be very hard to disappear when it comes time for a coup?

Health, Happiness, 8asdf6a7f57

Photo taken in Oakland, California.

I was nervous in all the cliché ways — sweaty palms, rubbing them on my thighs, slightly flushed and slightly sweaty. Everyone said the procedure wouldn’t hurt. But I didn’t know of any person who had gotten it reversed. So this was permanent. It wouldn’t help to dream of regaining ownership.

The recruiter gave me a kind glance over her desk. “Are you ready, dear?” She seemed configured to look grandmotherly, complete with the faint cookie smell. I felt a little suspicious, wondering if she was a bioengineered multi-stack human, placed here to comfort me into signing myself over. Or maybe her personality was just a happy coincidence for the corporation.

I needed the money. That’s how these things always happen. People used to join the United States Army because the education and income were worth risking your life. I heard about that from old Boomers on street corners. When I was a kid, they still hung around.

I never liked their greyness, the frozen-in-time feel of them. Boomers rocked back and forth on their haunches, shooting the shit with each other, and you couldn’t help but listen while waiting for the crosswalk. My parents’ parents, the generation birthed by the “Greatest Generation”; the generation that caused all of this anyway. Fuck ’em.

The recruiter pushed a tablet and stylus toward me. She nodded with a smile, just like a benevolent automaton would. I swiped through the forms slowly, trying to read everything but feeling my eyes glance off the denser patches of legalese. What could they say in these documents that would deter me, anyway?

I needed the money.

The press called them “oblivion jobs” — liberal columnists thought they were evil and conservative columnists called them an honest day’s work. Snapchat blew up with the debates for a while. Then other liberals jumped in and pointed out that this new solution was better than fully conscious drudgery.

Besides, the second faction of leftists argued, it was condescending to confiscate options from the poor. Let them choose. We chose, in droves, because it paid decently. Finally, something that paid decently! I was a holdout, actually. Paranoia and an irregular news habit kept me away from the recruiting offices until almost everyone else I knew had signed up.

The value proposition was straightforward: Sell your time and labor, like any job. But you don’t have to be awake while it’s happening. Rent out your body and accept long stretches of blankness. Would you rather be aware of the monotonous physical labor — hollowing out arcology units, adjusting every terminal for the dirt it was lodged in? Or would you rather wake up ten hours later, never having processed how you spent the time?

The commercials said it would be like going straight from breakfast to watching TV with a beer in hand. And you’d stay in shape, hooray!

The hardware-wetware combo behind this was complex and poorly understood, controversial among engineers as well as pundits. Roboticists were exasperated at first, not used to being second best, but eventually they resigned themselves to the new status quo. Machines were physically more capable, but they couldn’t match the sensory intuition of oblivion workers.

Everyone who told me the procedure wouldn’t hurt was right. And soon my employment situation felt familiar, of course. It was only strange for a couple of weeks to “wake up” with an aching back, nearly ready to go back to bed again.

Means & Ends of AI

Adam Elkus wrote an extremely long essay about some of the ethical quandaries raised by the development of artificial intelligence(s). In it he commented:

“The AI values community is beginning to take shape around the notion that the system can learn representations of values from relatively unstructured interactions with the environment. Which then opens the other can of worms of how the system can be biased to learn the ‘correct’ messages and ignore the incorrect ones.”

He is talking about unsupervised machine learning as it pertains to cultural assumptions. Furthermore, Elkus wrote:

“[A]ny kind of technically engineered system is a product of the social context that it is embedded within. Computers act in relatively complex ways to fulfill human needs and desires and are products of human knowledge and social grounding.”

I agree with this! Computers — and second-order products like software — are tools built by humans for human purposes. And yet this subject is most interesting when we consider how things might change when computers have the capacity to transcend human purposes.

Some people — Elkus perhaps included — dismiss this possibility as a pipe dream with no scientific basis. Perhaps the more salient inquiry is whether we can properly encode “human purposes” in the first place, who gets to define “human purposes”, and whether those aims can be adjusted later. If a machine can learn from itself and its past experiences (so to speak), starting over with a clean slate becomes trickier.

I want to tie this quandary to a parallel phenomenon. In an article that I saw shared frequently this weekend, Google’s former design ethicist Tristan Harris (also billed as a product philosopher — dude has the best job titles) wrote of tech companies:

“They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. […] By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones. But the closer we pay attention to the options we’re given, the more we’ll notice when they don’t actually align with our true needs.”

Similarly, tech companies get to determine the parameters and “motivations” of artificially intelligent programs’ behavior. We mere users aren’t given the opportunity to ask, “What if the computer used different data analysis methods? What if the algorithm was optimized for something other than marketing conversion rates?” In other words: “What if ‘human purposes’ weren’t treated as synonymous with ‘business goals’?”
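To make that concrete, here is a toy ranking with entirely made-up items and scores (the `click_rate` and `reported_value` fields are hypothetical, invented for illustration). Swapping the objective function, conversion versus what users report valuing, reorders the same feed without touching the data:

```python
# Hypothetical feed items; every score here is invented for illustration.
items = [
    {"title": "Outrage bait",    "click_rate": 0.9, "reported_value": 0.2},
    {"title": "Deep explainer",  "click_rate": 0.3, "reported_value": 0.9},
    {"title": "Friend's update", "click_rate": 0.5, "reported_value": 0.7},
]

def rank(items, objective):
    """Return titles ordered by whatever the platform chooses to optimize."""
    return [it["title"] for it in sorted(items, key=objective, reverse=True)]

by_conversion = rank(items, lambda it: it["click_rate"])
by_user_value = rank(items, lambda it: it["reported_value"])
# Same items, same data, different objective: a different feed on top.
```

The "neutral algorithm" framing hides exactly this choice of `objective`, which users never get to see, let alone set.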

Realistically, this will never happen, just like the former design ethicist’s idea of an “FDA for Tech” is ludicrous. Platforms’ and users’ needs don’t align perfectly, but they align well enough to create tremendous economic value, and that’s probably as good as the system can get.

Struggling Against Systems

“In some ways the Puritans seem to have taken the classic dystopian bargain — give up all freedom and individuality and art, and you can have a perfect society without crime or violence or inequality.” — Scott Alexander

“By preying on the modern necessity to stay connected, governments can reduce our dignity to something like that of tagged animals, the primary difference being that we paid for the tags and they’re in our pockets.” — Edward Snowden

If the Puritans pursued the “classic dystopian bargain”, maybe we’re pursuing the dystopian bargain nouveau. It’s not quite the opposite, but not far from it. We’ve given up all freedom by embracing ideological tribalism and accepting ubiquitous infotainment as a panacea, instead of agitating for the rights nominally promised by our two-faced governments. Who elected Janus? Why haven’t we kicked him out of office?

Graphic via The Intercept.

The rise of mass surveillance, enabled by SIGINT technology, is a good proxy for the government’s lack of respect for its citizens.

Sometimes my commentary on these issues can come across as anti-privacy or maybe pro-surveillance, because lots of the paranoid hacker-types I hang out with overestimate their threat models. So yes, I do want people to lighten up, and I’m pretty pessimistic about the prospect of “normies” using Tor and PGP.

But on the other hand, it’s terrifying that the NSA vacuums up all the information in the world. (International friends: your governments do it too, and they collaborate with the NSA when possible.) It’s terrifying that encryption is under fire. It’s terrifying that people get all but disappeared into prison. I don’t know what to do with this world.

Maybe the answer is nihilism.

Foozles + Whizgigs + Dopamine

“Humans are actually extremely good at certain types of data processing. Especially when there are only few data points available. Computers fail with proper decision making when they lack data. Humans often actually don’t.” — Martin Weigert on his blog Meshed Society

Weigert is referring to intuition. In a metaphorical way, human minds function like unsupervised machine learning algorithms. We absorb data — experiences and anecdotes — and we spit out predictions and decisions. We define the problem space based on the inputs we encounter and define the set of acceptable answers based on the reactions we get from the world.

There’s no guarantee of accuracy, or even of usefulness. It’s just a system of inputs and outputs that bounce against the given parameters. And it’s always in flux — we iterate toward a moving reward state, eager to feel sated in a way that a computer could never understand. In a way that we can never actually achieve. (What is this “contentment” you speak of?)
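For what it's worth, the metaphor can be sketched in code. The toy agent below is illustrative only, not a claim about how minds actually work; the two-arm setup, learning rate, and drift magnitudes are arbitrary assumptions. It chases a reward signal whose true values keep drifting, so its estimates iterate forever toward a moving target:

```python
import random

def run_agent(steps=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Two-armed bandit with drifting rewards: absorb feedback,
    emit choices, and keep updating as the target moves."""
    rng = random.Random(seed)
    true_values = [0.0, 0.0]   # hidden reward means, which drift over time
    estimates = [0.0, 0.0]     # the agent's learned predictions
    for _ in range(steps):
        # Mostly exploit the current best guess, occasionally explore.
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = max(range(2), key=lambda a: estimates[a])
        reward = true_values[action] + rng.gauss(0, 0.1)
        # Recency-weighted update: recent experience counts for more,
        # because the reward state is always in flux.
        estimates[action] += alpha * (reward - estimates[action])
        # Meanwhile, the world shifts underneath the agent.
        for a in range(2):
            true_values[a] += rng.gauss(0, 0.01)
    return estimates, true_values
```

There is no terminal "contentment" in this loop; stop it at any point and the estimates are merely the latest provisional guess.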

Computer memory space. Photo by Steve Jurvetson.

Kate Losse wrote in reference to the whole Facebook “Trending Topics” debacle:

“no choice a human or business makes when constructing an algorithm is in fact ‘neutral,’ it is simply what that human or business finds to be most valuable to them.”

That’s the reward state. Have you generated a result that is judged to be valuable? Have a dopamine hit. Have some money. Have all the accoutrements of capitalist success. Have a wife and a car and two-point-five kids and keep absorbing data and keep spitting back opinions and actions. If you deviate from the norms that we’ve collectively evolved to prize, then your dopamine machine will be disabled.

It’s only a matter of time until we make this relationship more explicit, right? Your job regulating the production of foozles and whizgigs will require brain stem and cortical access. You can be zapped with fear or drowned in pleasure whenever it suits the suits.

Fecal Inquiries: DIY Medicine & DIY Ethics

Josiah Zayner is a self-described biohacker who used to work at NASA and now runs a company that sells home science kits. He’s suffered from painful gastrointestinal problems for years, so he decided to conduct a DIY refresh of his gut bacteria. There’s no need to dance around this: Zayner consumed his friend’s shit in pill form. (For more background info, check out The Fecal Transplant Foundation’s website.) Arielle Duhaime-Ross wrote a fascinating article about this radical effort.

Photo by Vjeran Pavic for The Verge.

Zayner wasn’t deterred by the medical professionals who unanimously thought his idea was terrible:

“Of the nine biology and medical professionals [the journalist] spoke with, every single one stressed that Zayner’s experiment could make him very sick. Zayner vowed not to analyze his donor’s feces — it contradicted, he said, the DIY ethos of the experiment and could make the project seem less accessible to laypeople. As a result, he was putting himself at risk for hepatitis, rotavirus, and a whole slew of other pathogens and parasites. And his decision to take antibiotics to kill his own bacteria before the transplant was risky, said OpenBiome’s Osman. Some people carry C. diff without any symptoms; if Zayner was one of those people, disrupting the balance in his gut could enable C. diff to flourish — and the consequences of that could be life-threatening.”

Spoiler alert: Zayner is okay so far, and he reports that his gastrointestinal problems have improved. (Anecdotal evidence, to be sure, but not entirely meaningless.) His stated motivation of encouraging science as a means of liberation is admirable:

“I just want people to be able to be free. Free to explore this reality. I think all other freedoms come from this one. Freedom to have access to information and tools and resources. It is hard to oppress people without controlling what information they possess.”

Zayner’s point is reinforced by the panicky behavior of dictators — see Turkish president Erdogan’s current absurd (but terrifying) attempts to curb criticism of his regime.

Zayner also did a Reddit AMA after the article was published. One commenter critiqued Zayner’s “vigilante” approach because it didn’t contribute to the scientific community’s aggregate knowledge:

“The fact that the general populace think your results mean something doesn’t mean anything. They don’t have the scientific background to draw conclusions from your results. […] The fact that you cannot publish your results [in a peer-reviewed journal] means prominent members of the scientific community disagree with what you have done, and are saying your results cannot be trusted.”

Zayner disagreed with this characterization, responding:

“Just because the authority is not doing something, or does not believe in something, does not mean it is not possible or doesn’t exist. Every scientific discovery has ended decades of ‘actual scientists’ being dead wrong.”

Personally, I think we need both. We need university-sanctioned studies and we need biohackers. We need work that establishes ground rules and work that pushes limits. Luckily, both will persist.

Brief Thoughts on Androids, Cyborgs, & Humanity

Horror drawing of androids by Apo Xen.

“Most remarkable is David 8’s increased emotional capacity, which allows him to seamlessly adapt to any human encounter. Weyland has also fine-tuned David 8’s expression mapping sensors, engendering a strong sense of trust in 96% of users.” — Weyland Industries (the company from Prometheus)

Androids are a parody of humanity. We design them in our image. We give them — and their software equivalents — our names. Sarah is the theoretical lifeguard bot, and Charles is the helpful museum attendant. Ava is the manic pixie dream bot turned indifferent assassin. David is the sociopathic HAL 9000 redux. Their personalities are stereotypes constructed around particular job roles.

We build (and fantasize about building) human-looking machines that are programmed to ape us, often replicating our weaknesses as well as our strengths. But of course androids cannot feel what we feel. They can’t even see what we see, because computers don’t identify images in the same way the human brain does. Layers of mathematical analysis learn to recognize pixel patterns, but they can be fooled by tweaks that seem silly to human eyes.
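That brittleness is easy to demonstrate with a toy model. The sketch below is a deliberately tiny linear classifier, not a real vision system; the weights, bias, and 100-pixel "image" are all made up. Nudging every pixel by an imperceptible 0.02 in the direction the weights care about flips the classification, which is the same mechanism behind adversarial examples in deep networks:

```python
def classify(weights, bias, pixels):
    """Linear classifier: positive score means class 1."""
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if score > 0 else 0

def adversarial_nudge(weights, pixels, eps=0.02):
    """Fast-gradient-sign-style step: shift each pixel slightly against
    the weight that reads it, lowering the score as efficiently as
    possible for a fixed per-pixel budget."""
    return [p - eps * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

weights = [0.5] * 100   # many small weights, as in high-dimensional inputs
bias = -49.5
image = [1.0] * 100     # score = 50.0 - 49.5 = 0.5, so class 1

nudged = adversarial_nudge(weights, image)
# Every pixel moved by just 0.02 (2%), yet the total score drops by
# 100 * 0.5 * 0.02 = 1.0, pushing it below zero: the label flips.
```

The trick scales with dimensionality: the more pixels there are, the smaller each individual nudge needs to be.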

Cyborgs, on the other hand, are not so much an imitation of humanity as a gritty extension of it. (In case you’re not familiar with the distinction, an android is fully robotic, whereas a cyborg is a flesh human augmented by high technology.) We already live in a world of cyborgs — prosthetics and IUDs, heart monitors that can be hacked — and the more speculative DIY experiments aim to add a sixth sense to our arsenal.

I feel much more comfortable with cyborgs than I do with androids. Why is that? Is it because contemporary androids are still mired in the uncanny valley? When there isn’t as much of a disconnect between robotics, machine learning, and genuine human behavior, maybe I won’t be able to tell the difference.

Or maybe androids will become the norm, because why give birth via vaginal canal when you can avoid it? Cyborgs will stand out as antiquated oddities, still based on blood and bones instead of upgrading to silicon and steel. Parents will generate their infant’s mind based on a random data seed, then tweak the variables until the result is acceptable.

Uncertainty + Risk + Trying to Make Money

“A thing I had long suspected — the world’s absurdity — became obvious to me. I suddenly felt unbelievably free, and the freedom itself was an indication of that absurdity. […] Cautiously, clumsily, I loaded the revolver, then turned off the light. The thought of death, which had once so frightened me, was now an intimate and simple affair.” — The Eye by Vladimir Nabokov

Uncertainty blazing at the end of a tunnel.

Photo by darkday.

The wide variety of possible futures poses a problem. It’s very simple: uncertainty. Uncertainty creates risk, and it’s stressful. By definition the true future is unknown, and therefore scary. You can attempt to prepare for the future, but you can’t really prepare for it — because you’ll prepare for the future you expect, which will differ significantly from the future that actually happens.

Technology analyst Ben Thompson likes to say that the worst-case scenario for a five-year plan is that you achieve your goals, but the ground has shifted under you in the meantime. The ground is constantly shifting underneath us. It’s easy to project consumer gadget trends, but it’s not so easy to call the election (Hillary will win) or what will happen in Syria (no guesses here). Will self-driving cars take over the roads in ten years or fifteen? You can’t cash out if you don’t buy and sell the stocks at the right time. Will universal basic income ever be applied beyond a few one-off experiments?

The way to deal with uncertainty is not to try to eliminate it — that’s a futile task. Uncertainty and risk are inherent features of reality. The way to deal with uncertainty is to absorb it. Make it part of your being and your reactions. Come to peace with the fact that life is cruelly unpredictable. Embrace the instability of your circumstances, and practice honing your reflexes. You’ll need to jump at some point.


I’ve also been wondering whether I can make money from Exolymph. I don’t want to charge for the newsletter directly, and advertising won’t be lucrative unless I can gain several multiples of the subscriber base I have now. Sorry, I know it’s crass to talk about #monetizing, but I would love to be able to support myself by writing quasi sci-fi thinkpieces and story snippets. However, in order to convince people to give you money, you need to satisfy a market need — in other words, to solve a problem.

What problem most torments the kind of person who subscribes to Exolymph? Based on the anxious discussions about Donald Trump in the chat group — as well as automation and surveillance and the everything-industrial complex — the core worry that captivates us is uncertainty. I can write blasé dismissals of the utility of obsessing over uncertainty, but of course I’m still preoccupied with it. It seems that I’m not alone.

Because I’m a writer, I immediately thought, “Okay, I’ll write a guide to embracing uncertainty.” But that’s a silly idea. I’m not an expert, and besides, such a guide already exists. Neither am I a researcher, nor a scientist, nor a successful investor, nor any of those people who have either studied or experienced uncertainty to the degree that they can talk about it with anything approaching, well, certainty.

All I’m equipped to do is explore. Find out how people dealt with economic upheaval in the past. Dive into the long Wikipedia list of cognitive biases. I’m throwing ideas at the wall, but I still don’t know if anyone would pay for this.

Would you pay $5 per month for a weekly offshoot newsletter that delivered meditations and investigations on functioning and thriving in a world of uncertainty? If no one can be bothered to respond to this email, I can safely assume that no one would bother to pay actual money.

© 2019 Exolymph. All rights reserved.
