

Archives of the Exolymph email newsletter.

This website was archived on July 20, 2019. It is frozen in time on that date.
Exolymph creator Sonya Mann's active website is Sonya, Supposedly.

Therapy Bots and Nondisclosure Agreements

Two empty chairs. Image via R. Crap Mariner.

Let’s talk about therapy bots. I don’t want to list every therapy bot that’s ever existed — and there are a few — so I’ll just trust you to Google “therapy bots” if you’re looking for a survey of the efforts so far. Instead I want to discuss the next-gen tech. There are ethical quandaries.

If (when) effective therapy bots come onto the market, it will be a miracle. Note the word “effective”. Maybe it’ll be 3D facial models in VR, and machine learning for the backend, but it might be some manifestation I can’t come up with. Doesn’t really matter.

To count as effective, they have to actually help people deal with their angst and self-loathing and grief and resentment. Any therapy bots that can do that will do a tremendous amount of good. Not because I think they’ll be more skilled than human therapists — who knows — but because they’ll be more broadly available.

Software is an order of magnitude cheaper than human employees, so currently underserved demographics may have greater access to professional mental healthcare than they ever have before. Obviously the situation for rich people will still be better, but it’s preferable to be a poor person with a smartphone in a world where rich people have laptops than it is to be a poor person without a smartphone in a world where no one has a computer of any size.

Here’s the thing. Consider the data-retention policies of the companies that own the therapy bots. Of course all the processing power and raw data will live in the cloud. Will access to that information be governed by the same strict nondisclosure laws as human therapists? To what extent will HIPAA and equivalent non-USA privacy requirements apply?

Now, I don’t know about you, but if my current Homo sapiens therapist asked if she could record audio of our sessions, I would say no. I’m usually pretty blasé about privacy, and I’m somewhat open about being mentally ill, but the actual content of my conversations with my therapist is very serious to me. I trust her, but I don’t trust technology. All kinds of companies get breached.

Information on anyone else’s computer — that includes the cloud, which is really just a rented datacenter somewhere — is information that you don’t control, and information that you don’t control has a way of going places you don’t expect it to.

Here’s something I guarantee would happen: An employee at a therapy bot company has a spouse who uses the service. That employee is abusive. They access their spouse’s session data. What happens next? Who is held responsible?

I’m not saying that therapy bots are an inherently bad idea, or that the inevitable harm to individuals would outweigh the benefits to lots of other individuals. I’m saying that we have a hard enough time with sensitive data as it is. And I believe that collateral damage is a bug, not a feature.


Great comments on /r/DarkFuturology.

Neon Exploration: A Quick Review of Hypercage

Craig Lea Gordon asked me to review his cyberpunk novella Hypercage. The book reminds me a bit of Black Mirror — reality with a techno-antisocial twist. Here’s a passage from the beginning that shows what I mean:

His wife’s face [formed] a frown of danger across the restaurant table. He was in trouble.

“What the hell do you think you’re doing?” she demanded.

Dave paused as a set of notifications slotted up the side of his HUD, glowing vividly against the romantic lighting of their corner booth.

+200 Session XP
+100 XP Mission flare
+1000 XP Enemy craft destroyed
-1500 XP Mission objectives failed
Net score: -100 XP
Daily XP total: 71,265

Minus 100 experience points? And only a hundred XP for mission flare! That was fucking bollocks. He slammed the table with his fist. The two glasses of red wine wobbled uncertainly from the impact.

Hypercage suffers from a little too much emphasis on the futuristic tech at the expense of character development, but the story rollicks along. It won’t bore you. If you need the protagonist to be likable in order to enjoy a story, skip this one. Antihero fans will be fine.

I got sucked in once the main character discovered a VR plugin that was supposed to let his brain multitask, so he could carry on a normal life while maintaining his addiction to the in-universe Eve Online clone.

Artwork by Surian Soosay.

That’s all. Go on home now. Or go download Hypercage.

“There is an error with my dependencies”

Exolymph reader Set Hallström, AKA Sakrecoer, sent me an original song called “Dependency” — these are the lyrics:

sudo apt update
sudo apt upgrade

There is an error with my dependencies
Consultd 1.2 and emplyomentd 1.70
I cannot pay my rent without their libraries
And to install i need to share my salary

Where do i fit in this society?
The more i look and the less i see
They want no robots nor do they want me.
work is a point in the agenda of the party

sudo apt update
sudo apt upgrade

My liver isn’t black market worthy
And my master degree from a street university
My ambitions are low and i am debt free
There is no room in the industry for robots like me

Don’t get me wrong i would also like to be
Installed and running and compatible with society
But i am running a different library
Because my kernel is still libre and free.

All unedited. Another thing — Craig Lea Gordon’s novella Hypercage is available on Amazon for zero dollars. Review coming soon, but I wanted to let you know now!

Head Transplants & Idealism

Photo by Newtown grafitti [sic].

I can’t stop thinking about “The Audacious Plan to Save This Man’s Life by Transplanting His Head”. It’s a fascinating article. Both the patient and his doctors seem delusionally optimistic. Long story short, we don’t have the technological capability to do this even semi-safely.

On the other hand, progress happens when people push the envelope, not when people plod along, dissatisfied with the status quo but willing to let it change incrementally over the course of many decades. That’s why the world needs activists and idealists — they make a lot of noise and force shifts in public sentiment, at which point the pragmatists start reworking their plans.

The irony is that I’m the second type of person, one of the plodders. I’m a cynic and an incrementalist, especially when it comes to politics. For example, I’ve written before about my frustrations with anarchists and libertarians, even though I share many of their goals and principles. I just don’t have much faith in visions of utopia — even though utopians are the ones who push all of us toward slightly less awful realities.

I think the disconnect is that I expect people to be selfish, and I’m skeptical that we can figure out a general resource-allocation method that’s better than markets. (Certain things like healthcare are b0rked by markets, but in some ways even healthcare is over-regulated.) I’m not sure I believe in a world without hunger, or rape, or corruption, or any number of bad things.

But hey, maybe, if the overzealous doctors get approval from the Chinese government, soon enough I’ll believe in a world where quadriplegics and people with degenerative diseases can get head transplants. Perhaps not successful head transplants — that will come later.

Unconvincing Androids

Kids called them Hollow Heads. When the androids hit the market, the term “Bunnies” almost caught on instead, inspired by the ear-like antenna prongs, but the alliteration of “Hollow Heads” was more appealing to the first generation who interacted with the machines.

Artwork by NicoTag.

Besides, the increasingly powerful House Rabbit Society lobbied against the trend conflating their beloved pets with humanoid robots. After dogs died out in the 2030s, rabbits got a lot more popular, and their owners weren’t keen on having their endearments coopted.

One of the original engineers admitted on a virtustream that his design riffed off of an old film, back when screens were dominant. He said it was called Chappie. No one else seemed to remember this movie and soon the engineer disappeared from spokesmanship.

The other big innovation, besides the distinctive “ears” (which didn’t actually have any audio-processing capabilities), was to make the androids slightly insectoid. Just a little something in their structure. Babies found them unsettling, and this was judged to be good. Faces without heads — you could relate to them, but you’d never mistake them for human.

At first the market responded better to realistic androids. Boutiques liked them, as did hotels. But after a couple of high-profile impersonations splashed all over the virtustreams, along with that one abduction, the Bureau of Consumer Protection pushed Congress to regulate the new machines. They codified the Hollow Head design, and soon a thousand variations were being imported from China.

It wasn’t that the androids had been going rogue. Their owners programmed them for nefarious purposes. The nice thing about the Hollow Heads was that they really stood out, so you wouldn’t have them going around signing contracts without being detected.

Of course, the US Empire’s sphere of influence was only so big. Plenty of factories in Russia kept churning out androids with full craniums, some also featuring convincingly visible pores.

The Strategic Subjects List

Detail of a satirical magazine cover for All Cops Are Beautiful, created by Krzysztof Nowak.

United States policing is full of newspeak, the euphemistic language that governments use to reframe their control of citizens. Take “officer-involved shooting”, a much-maligned term that police departments and then news organizations use to flatten legitimate self-defense and extrajudicial executions into the same type of incident.

And now, in the age of algorithms, we have Chicago’s “Strategic Subjects List”:

Spearheaded by the Chicago Police Department in collaboration with the Illinois Institute of Technology, the pilot project uses an algorithm to rank and identify people most likely to be perpetrators or victims of gun violence based on data points like prior narcotics arrests, gang affiliation and age at the time of last arrest. An experiment in what is known as “predictive policing,” the algorithm initially identified 426 people whom police say they’ve targeted with preventative social services. […]

A recently published study by the RAND Corporation, a think tank that focuses on defense, found that using the list didn’t help the Chicago Police Department keep its subjects away from violent crime. Neither were they more likely to receive social services. The only noticeable difference it made was that people on the list ended up arrested more often.

WOW, WHAT A WEIRD COINCIDENCE! The “strategic subjects” on the list were subjected, strategically, to increased police attention, and I’m sure they were all thrilled by the Chicago Police Department’s interest in their welfare.

Less than fifty years ago, the Chicago Police Department literally tortured black men in order to coerce “confessions”. None of that is euphemism. A cattle prod to the genitals — but maybe it ought to be called “officer-involved agony”?

I get so worked up about language because language itself can function as a predictive model. The words people use shape how they think, and thoughts have some kind of impact on actions. Naturally, the CPD officers who carried out the torture called their victims the N-word.

I wonder what proportion of the Strategic Subjects List is black? Given “data points like prior narcotics arrests [and] gang affiliation”, an algorithm can spit out the legacy of 245 years of legal slavery more efficiently than a human. But torture in Chicago is still handcrafted by red-blooded American men. Trump would be proud.
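Neither the CPD nor RAND has published the model’s internals, so here is a purely illustrative sketch in Python (every field name and weight below is invented) of how a ranking built only on arrest-derived data points inherits the bias of the policing that produced those arrests:

from dataclasses import dataclass

# Hypothetical toy model, not the actual Strategic Subjects List algorithm.
@dataclass
class Subject:
    prior_narcotics_arrests: int  # counts enforcement intensity as much as behavior
    gang_affiliation: bool        # usually a police-maintained label
    age_at_last_arrest: int

def risk_score(s: Subject) -> float:
    score = 10.0 * s.prior_narcotics_arrests
    score += 25.0 if s.gang_affiliation else 0.0
    score += max(0.0, 30.0 - s.age_at_last_arrest)  # younger at last arrest scores higher
    return score

# Two hypothetical people whose behavior could be identical; one lives where
# narcotics patrols concentrate, the other doesn't.
over_policed = Subject(prior_narcotics_arrests=3, gang_affiliation=True, age_at_last_arrest=19)
under_policed = Subject(prior_narcotics_arrests=1, gang_affiliation=False, age_at_last_arrest=25)

print(risk_score(over_policed))   # 66.0: ranked high, gets extra police attention
print(risk_score(under_policed))  # 15.0: stays off the list

No race variable appears anywhere in that toy model; if arrest counts and gang labels track where police already concentrate, the ranking reproduces the disparity all by itself.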

Guillotineplex

Are you familiar with the killing machine? It does what it sounds like. And unfortunately the machine is indiscriminate — the humans who operate and maintain it choose the machine’s targets, but they don’t always do a good job. So the machine terminates murderers, but even more often its victims are innocent.

Such is the way of a killing machine. It’s just a machine. Objects — or assemblages of objects — can’t be responsible or culpable for their “actions”.

Photo by mel.

I could be talking about a few different machines. I could be talking about US drones in the Middle East, or about the United States Armed Forces as a larger whole. I could be talking about lethal injection setups, or about our entire criminal “justice” system.

In a literal sense machines are different from bureaucracies. But regarding human organizations as machines can be a useful mental model. When we zoom out to that perspective, it becomes obvious how little the good intentions of the participants matter.

A cog in a machine can be very well-made and run smoothly, interacting admirably with the parts next to it. But if the overarching design of the machine is to enable corrupt operators to execute their enemies, well…

Program or Be Programmed; UX or Be UX’d

Artwork by GLAS-8.

Aboniks posted this blockbuster comment on artificial consciousness in the Cyberpunk Futurism chat group:

Pondering how the digital brain-in-a-jar might practice good mental hygiene.

You’d need a hardwired system of I/O and R/W restrictions in place to protect the core data that made up the “youness”. A “youness ROM”, perhaps. If that analogy holds up, then maybe my grandmother’s case is akin to a software overlay suddenly failing. Firmware crash. But I’m not convinced brains are so amenable to simple analogy. The processing and the storage that goes on in our heads doesn’t seem to be modular in the same sense that our digital tools are.

Anyway, if your software and hardware (however they’re arranged and designed) are capable of perfect simulation then they are equally capable of perfect deception. There may be a difference between simulation and deception, but I can’t think of a way to put it that doesn’t seem… forced.

So, for the rest of your “life”, your entire experience is UX, in the tech-bro sense of the word.

“Program or be programmed,” as Rushkoff would say. If you’re not the UX designer, you’re hopelessly vulnerable. Who are the UX designers, then? Who decides where the experience stops and the “youness” starts? Who defines that border to protect you? Another Zuckerberg running a perpetual game of three-card Monte with the privacy policy?

Maybe not an individual, but something more monolithic, ending in “SA” or “Inc”? Will there be an equivalent of Snowden or Assange to expose their profit-driven compromises in our storage facility fail-safes and leak news of government interference in the development process of our gullibility drivers?

Will we be allowed to believe them?

(Lightly edited for readability.)
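To make the “youness ROM” idea concrete, here is a playful, purely hypothetical sketch in Python (the class and its fields are invented for illustration) of a core that the running overlay can read but never write:

from types import MappingProxyType

class DigitalBrain:
    def __init__(self, core_identity: dict):
        # The "youness ROM": copied once, then exposed read-only.
        self._core = MappingProxyType(dict(core_identity))
        # The overlay: mutable working state, the part that can crash like firmware.
        self.overlay = {}

    @property
    def core(self):
        return self._core  # readable, but any write attempt raises TypeError

brain = DigitalBrain({"name": "you", "temperament": "wry"})
brain.overlay["mood"] = "uneasy"      # allowed: the overlay is writable
print(brain.core["temperament"])      # allowed: the core is readable
try:
    brain.core["name"] = "someone else"
except TypeError:
    print("write to the core rejected")

Whether an actual mind decomposes that cleanly is exactly the part Aboniks doubts.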


She wondered where the expression “surf the net” came from. Of course Sarah knew what surfing was, but why “net”? Did it used to have something to do with catching fish?

She was fourteen and relatively popular. Her classmates thought she was nice and mildly funny. Sarah knew because of the survey reports.

Harry, the troublemaker, would shoot caustic messages into their class channel. “Who surveys the surveyors?” he asked.

Finally Allison answered — Allison was more popular than Sarah, so Sarah looked up to her — “You are so fucking boring. Get off your history kick and live in the real world, Harry. Like the rest of us. No one cares what the surveyors think. We saw them for like five minutes.”

He shot back, “You know those surveys determine your job trajectory, right?”

Allison told him she thought the test-writers knew what they were doing. Harry called her a regime sycophant. Then the teacher stepped in and reminded them that hostility was inappropriate for this venue.

Four years later, at eighteen, Sarah wondered what ended up happening to Harry. But only for a couple of minutes. Then she went back to work.

Pyongyang Is Totes Awesome, Bro

This is a superlative cyberpunk headline if I ever saw one: “YouTube Stars Are Now Being Used for North Korean Propaganda”. A vlogger named Louis Cole uploaded a series of videos in which he gallivanted around the DPRK, with nary a mention of the country’s atrocious human rights record.

Per the article: “Cole has, so far, not really made mention of any of that, choosing instead to go for a light tone, oohing and ahhing over abundant food in a country ravaged by hunger.” I mean, to be fair, famine is a bummer, right? What brand would want to sponsor a vlogger who talks about that stuff?

Louis Cole on Twitter, @funforlouis. Glad you’re enjoying yourself, buddy.

The "beautiful military guide at the war museum", praised by Louis Cole on Instagram.

The “beautiful military guide at the war museum”, praised by Louis Cole on Instagram.

In the article, Richard Lawson wrote incisively:

The more you watch Cole’s videos from North Korea, the more you wonder if he’s plainly ignorant to the plight of many people in the country, or if he’s willingly doing an alarmingly thorough job of carrying water for Kim Jong Un’s regime — not really caring what the implications are, because, hey, cool trip.

Or maybe it’s something else. Maybe this is a surreal extreme of the unthinking, vacuous new-niceness that occupies a large amount of YouTube territory, content creators so determined to deliver an upbeat, brand-friendly message that the uncomfortable truths of the world — personal and political — go mind-bogglingly, witlessly ignored.

Louis Cole’s manager insisted that he wasn’t being paid by the DPRK and didn’t intend to “gloss over or dismiss any negative issues that plague the country”. Like Lawson, I believe that.

I don’t think this vlogger was gleefully pressing “upload” and thinking, “Haha, now’s my chance to bolster the image of an oppressive dictator!” On his Twitter account — which he appears to run himself — Cole said, “its a tiny step & gesture of peace. waving a finger & isolating the country even more fuels division” [sic].

Here’s the thing, though — Cole may be goodhearted and he may mean all the best, but it doesn’t make a difference. It doesn’t change the fact that he was shilling for the carefully curated trip that a brutal regime presented to him. And it doesn’t change the incentives that he and his professional brethren are responding to.

I’ve spent a lot of time reading about the new “influencer economy”, as it’s being called — for instance, Elspeth Reeve’s fascinating article on teen lip-syncing sensations — because I’m a media geek and that’s a substantial portion of the future of media. So I follow a lot of these people on social media.

The “influencers” who are raking in cash are relentlessly positive. Big companies are risk-averse — they don’t want to be associated with anything negative — and big companies are the ones cutting checks to YouTube stars, Instagram stars, etc. Interpersonal drama pops up now and then, but any political questions are avoided.

Who can blame them? Gotta make a buck and late capitalism only offers so many options…

(I still blame them.)

© 2019 Exolymph. All rights reserved.
