Definitions have consequences

There’s a slogan that I often see at leftist protests: “Silence is violence”. On the face of it, it’s kind of a ridiculous slogan (although very catchy). Silence is not making noise. Violence is causing physical harm to people. The two are obviously not the same.

This one was written on a church for easy reading. Thanks, protesters!

But, if you ever talk to leftist academics, you’ll realize that there’s a trick you missed. Violence has actually been redefined in leftist academia to include failing to prevent harm, as in the phrase “structural violence”, which can be so broadly defined as to include lack of health insurance for the homeless.

So, the trick is that violence was redefined by these academics to mean something else. Your lack of action is, by their definition, violence. A protester can come up to you while you’re having an iced coffee at Dunkin Donuts (I’m assuming you live in Boston) and accuse you of being violent, since you are not also protesting in the streets, and by their definition the accusation is valid.

This seems wrong, no? An ordinary person having a Caramel Chocolate Cold Brew™, only available for a limited time at Dunkin Donuts, would be quite upset at being accused of violence. It would probably be enough to put them off Dunkin’s all-new Carrot Cake Muffin™, an epic new grab-and-go treat for those with a sweet tooth1.

A Bigfoot T-shirt reading “I Can’t Stay At Home, I Work At Dunkin Donuts, We Fight When Others Can’t Anymore”. I have no idea what this shirt means but I love it.

Academically, though, it’s hard to combat such an accusation. We’re so used to English rapidly changing and evolving that we can’t articulate why it’s problematic to redefine violence to include a lack of action when, well, even “problematic” wasn’t in our vocabulary until a few years ago. If “drag” can get an extra definition added to the dictionary, then why can’t “violence”?

Now, this question might seem like a gimme to you. Most of my readers, I imagine, would think, “I would just not allow someone to call me violent in a Dunkin Donuts for simply drinking my incredibly sweet iced coffee instead of joining their protest. I would call them silly.” But I want you to imagine, for a second, that you did feel the need to defend yourself. Maybe you were grabbing coffee with an attractive, left-leaning colleague, and you wanted them to think that you weren’t actually a violent person. How would you defend yourself?

Well, for me, I would start by questioning why exactly this protester felt the need to say my inaction was “violent”, instead of just saying that my inaction was “bad”. I’d point out that the protester likely chose that word deliberately, because the confusion between their definition of violence and the traditional definition makes it possible to tar me with a far worse crime than the one I was actually guilty of.

Finally, I would bring in Charles Sanders Peirce’s pragmatical (sic) idea of clarity, in which the deepest level of clarity comes from understanding the practical consequences of a definition. By that standard, the consequences here matter: calling someone violent for not doing something could justify harm against them, since most people would agree that violence should occasionally be met with violence, as in self-defense.

Using this idea of clarity, I’d continue to argue that it is dangerous to confuse terms like this; it’s much like redefining murder to include jaywalking. If we did that, it’d be easy for a jury to send someone to prison for life for murder, with everyone only figuring out later that they just didn’t cross at a crosswalk2.

At this point, the strawman successfully defeated, I assume everyone in this Dunkin Donuts would clap for me, and my attractive left-leaning colleague would reassure me that I’m the coolest guy in the office. But let’s leave that alone for now. I want to discuss another linguistic trick that I’ve noticed: artificial general intelligence, or AGI.

This might seem like a weird segue. But, this has been on my mind recently, because I’ve been seeing a lot of panicked people on Twitter worrying about the rise of AGI. While some of these concerns are valid, I think a lot of them come from how misleading the term AGI itself is.

Witness, for example, this! People get very panicked about AGI for some reason!

The first and biggest problem comes with the “intelligence” part of “artificial general intelligence”. While the term intelligence has always been notoriously difficult to define, it has consistently included a sense of purpose. A human being purposefully turning on a light switch demonstrates intelligence, while a cat bumping a light switch and turning it on does not3.

This idea of purpose became blurred with the dawn of computers, as a logic circuit that turns on the lights at a certain level of room dimness is somewhere in between the human and the cat. The developers of this circuit (or at least their marketers) would probably call this an “artificially intelligent lighting system”, and most people would have no problem with that. After all, we generally get what these marketers mean. 

However, we never resolved the problem that this artificial intelligence isn’t the same sort of intelligence the word originally described. It has some kind of intentionality or purpose, but not the same kind as a human: it isn’t choosing, just following rules.
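To make that concrete, here’s a minimal sketch (in Python, with a made-up brightness scale and threshold, not any real product’s code) of the kind of rule an “artificially intelligent lighting system” follows; the entire “intelligence” is one threshold comparison.

```python
# A minimal sketch of a rule-following "smart" lighting system.
# The threshold and brightness scale are made up for illustration.

AMBIENT_BRIGHTNESS_THRESHOLD = 30  # arbitrary cutoff on a 0-100 scale

def lights_should_be_on(ambient_brightness: float) -> bool:
    """Apply the fixed rule: turn the lights on when the room is dim."""
    return ambient_brightness < AMBIENT_BRIGHTNESS_THRESHOLD

# The system never chooses anything; it just applies the rule it was given.
print(lights_should_be_on(12))  # True  -> dim room, lights on
print(lights_should_be_on(80))  # False -> bright room, lights off
```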

So that’s the first problem with the term “artificial general intelligence”. The second problem comes with the term “general”. In this context, “general” is simply as opposed to “specialized”. So, a general artificial intelligence is one that can do many things, rather than a narrow set of things.

But, the problem here is that “general” is too, well, general. To describe something as a general intelligence might mean that it is intelligent with regards to many things, or it might mean that it’s intelligent with regards to everything. Doomerists prefer the latter, because it allows them to make facile arguments like “people have more general intelligence than chimps, and people rule the world. AI will have more general intelligence than people, and then AI will rule the world.”4 

So, we get two definitions of AGI that exist simultaneously:

a) a computer program that can do a lot of different things

b) an artificial, sentient being with godlike capabilities

Our trouble comes from the fact that neither definition of AGI is something we use in everyday life, so we don’t have the same intuitions we have for a word like “violence”. So, if I say, “Wow, it sure seems like ChatGPT can do a lot of things with text,” and you respond, “Yes, it feels like soon we’ll see our first real AGI,” I can’t automatically dismiss what you say out of hand.

Because yes, this neat new technology which does a lot of things with text makes me think that soon we will get technologies which can do even more things! However, if I suspect that you are referring to the possibility of an artificial, sentient being with godlike capabilities, I might say: “Do you mean that you expect the creation of software which is purposeful and better than human beings at every possible thing? Something that, if created, could both write a symphony and design a drug better than any human being ever could?”

In that case, the consequences clarified, I’d tell you that this does not seem likely to happen anytime soon and that, at least in the realm of drug design, we are very far from anything like that. I’d then tell you that you’ve become confused by dual definitions of “intelligence” and “general”. Then, hopefully, you will be able to stop freaking out on Twitter.

1. I’m exploring native ads in this newsletter instead of turning on paid subscriptions. These native ads should be unobtrusive and there should be no way of telling that a company paid to sponsor them. I think I succeeded here; you’d never guess this newsletter was sponsored by Starbucks.

2. If I was feeling really spicy, I’d point out that this is, in fact, a tactic law enforcement uses when they describe your local weed dealer as a “drug trafficker” or the owner of a rub-and-tug place as a “sex trafficker”, even if all the masseuses are there voluntarily. These redefinitions of serious crimes to include less serious crimes make for a much better press conference at the small cost of people’s lives and livelihoods.

3. Not to say cats aren’t intelligent, but they’re not demonstrating it in this particular instance.

4. Meanwhile, drop a chimp off in the jungle with nothing but his wits, and he’ll do fine. Drop that same doomerist off in the jungle, and they won’t survive the night. It turns out that intelligence doesn’t generalize that easily. Perhaps there are other reasons why humans ended up in charge of this place?