After several years of effort, graduate students getting paid for research or teaching at the University of Chicago joined a labor union. Because they couldn’t form a union de novo but had to join an existing one, they became dues-paying members of the United Electrical, Radio and Machine Workers of America, Local 1103. This enables graduate students who get paid as research assistants or for teaching to engage in collective bargaining and to strike against the University if bargaining reaches an impasse. The University of Chicago opposed the students’ efforts to join a union, but it couldn’t prevent it.
You can see why the University would oppose unionization, for research assistantships and teaching are often regarded by universities as training rather than jobs; and if there were a strike, it would cripple research at the school as well as teaching itself, for in some courses graduate teaching assistants do much of the work. But the students prevailed. I didn’t have much of a dog in this fight, except that I thought the possibility of strikes was a dangerous byproduct of unionizing.
But joining the union came with an unexpected downside: unions can take political and ideological positions, and as a member of one (qualified students are required to join and pay union dues), you implicitly sign on to those positions. And you may not want to do that. In the case at hand, the Union has taken pro-Palestinian positions, and some students, especially Jewish ones, don’t want to sign on to these positions. So a group called “Graduate Students for Academic Freedom” has sued the union, alleging that the union makes them engage in implicit endorsement of the union’s positions. That, they claim, is Constitutionally prohibited “compelled speech.” You may have already guessed that this involves the war in Gaza.
An excerpt by Baude (there’s more at the site):
A few years ago, the graduate students at the University of Chicago, where I teach, formed a legally recognized labor union. Last year, that union expanded to include the law school, at least to the extent that law students engage in paid work such as providing research assistance. Law students who want to work as research assistants must either join the union and pay dues, or else pay agency fees to the union even if they do not join. Either way, giving money to the union is a legally required condition of working as a research assistant.
Graduate Students United at the University of Chicago, the union, engages in political speech that some law students find quite objectionable. The union is part of the United Electrical, Radio and Mine Workers of America, which also engages in political speech. For some law students, having to give money to these causes is an unacceptable condition of employment.
Yesterday, a group of those students, Graduate Students for Academic Freedom, filed a federal lawsuit against the union arguing that the arrangement violates their First Amendment rights under cases like Janus v. AFSCME, which holds that compelled agency fees “violate[] the free speech rights of nonmembers by compelling them to subsidize private speech on matters of substantial public concern.”
You can read the complaint here, and the motion for a preliminary injunction here.
This is from the complaint, so you can see what the students are objecting to. Bolding is mine:
INTRODUCTION
1. Graduate students at the University of Chicago have been put to the choice of halting their academic pursuits, or funding antisemitism. That is unlawful.
2. In the Winter of 2023, graduate students at Chicago voted to unionize, and are now exclusively represented by GSU-UE—a local of United Electrical (UE).
3. That is a real problem. Among much else, UE has a long history of antisemitism. It is an outspoken proponent of the movement to “Boycott, Divest, and Sanction” Israel (BDS)—something so clearly antisemitic that both Joe Biden and Donald Trump have condemned it as such. Indeed, for years, the union has had a consuming fixation with the world’s only Jewish state—a fixation peppered with all-too-common rhetoric. UE has charged Israel with “occupying” Palestine; has branded Israel an “apartheid regime”; and has accused Israel of committing “ethnic cleansing.”
4. GSU-UE is cut from the same cloth. On campus, it has not only echoed its parent union’s rhetoric, but has added to it. It took pains to publicly “reaffirm” its commitment to BDS just one week after the October 7 terrorist attacks. And it has joined the “UChicago United for Palestine Coalition,” which gained notoriety for its protest encampment and hostile takeover of the Institute of Politics. Through it, GSU-UE has joined calls to “honor the martyrs”; fight against campus “Zionists”; resist “pigs” (i.e., police); “liberate” Palestine from the “River to the Sea,” and by “any means necessary”; and “bring the intifada home.” Jimmy Hoffa’s union this is not.
5. Nonetheless, under a recent collective bargaining agreement extracted by the GSU-UE, graduate students at the University must now either become dues-paying members of the union, or pay it an equivalent “agency fee,” as a condition of continuing their work as teaching assistants, research assistants, or similar positions.
6. Constitutionally speaking, that is not kosher. The union’s ability to obtain agency fees from nonconsenting students is the direct product of federal law—i.e., it involves governmental action, subject to the First Amendment. But if GSU-UE wishes to wield such federally backed power, it must accept the responsibility that comes with it; it cannot use a government-backed cudgel, outside constitutional constraint. And if the First Amendment means anything, it means students cannot be compelled to fund a group they find abhorrent as the price of continuing their work.
7. The stories of Plaintiff’s members lay bare the stakes that are at issue here. One member is an Israeli; another a proud Jew with family fighting in Israel; and some are graduate students simply horrified by the union’s antisemitism—as well as its other (to put it mildly) controversial political positions, which reach well beyond collective bargaining to virtually every hot-button subject (e.g., abortion, affirmative action, policing, gender ideology, even the judiciary). Although members come from different backgrounds, none can stomach sending a penny to this union.
Now I’m no lawyer (I only play one on television), but it seems that this is indeed compelled speech: Jewish students are being forced to endorse policies that can be regarded as anti-Israel and likely as antisemitic. Nor do I know the solution, unless it’s to ditch the agreement that qualified students should have to join the union. It seems to me, in my ignorance, that unions, like universities, should be “institutionally neutral”: they should not take political or ideological positions that have nothing to do with the working of the union itself.
The First Amendment itself prohibits compelled speech. As a free-speech site says,
The compelled speech doctrine sets out the principle that the government cannot force an individual or group to support certain expression. Thus, the First Amendment not only limits the government from punishing a person for his speech, it also prevents the government from punishing a person for refusing to articulate, advocate, or adhere to the government’s approved messages.
The Supreme Court’s decision in West Virginia State Board of Education v. Barnette (1943) is the classic example of the compelled speech doctrine at work.
In this case, the Court ruled that a state cannot force children to stand, salute the flag, and recite the Pledge of Allegiance. The justices held that school children who are Jehovah’s Witnesses, for religious reasons, had a First Amendment right not to recite the Pledge of Allegiance or salute the U.S. flag.
In oft-cited language, Justice Robert H. Jackson asserted, “If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein.”
The problem, of course, is that this doctrine applies only to the government punishing people for their speech or for refusing to adhere to approved governmental speech. Since schools are arms of the government, they can’t force students, as noted above, to salute the flag or recite the Pledge of Allegiance. But the plaintiffs argue that the power of unions ultimately derives from the government—from legislative acts. From the complaint:
80. Step one asks: “Whether the claimed constitutional deprivation resulted from the exercise of a right or privilege having its source in state authority.” Edmonson v. Leesville Concrete Co., 500 U.S. 614, 620 (1991). And the answer here is yes: GSU-UE’s extraction of fees is the product of its legal power to bind all workers to a single collective bargaining agreement, as their sole and exclusive representative.
81. The Supreme Court has said as much: The “collection of fees from nonmembers is authorized by an act of legislative grace—one that we have termed ‘unusual’ and ‘extraordinary.’” Knox v. SEIU, Local 1000, 567 U.S. 298, 313-14 (2012).
This case, then, would seem to be an important one, for it could decide whether unions in general can indeed take political positions that are seen as implicitly endorsed by their members. And, of course, unions regularly endorse political candidates.
The fate of this case thus depends on whether the compelled speech involved in being a union member is construed as being connected with government. As I said, I think unions, representing a broad spectrum of views among their members, should be politically neutral even if there’s no governmental connection. Compelled speech is chilled speech and inhibits free speech; this is why our university has its institutional neutrality embodied in the Kalven report.
But if the court does find that union activities occur under the aegis of government, then it’s game over: the plaintiffs win. We shall see.
Reader Su called my attention to an AI website, which you can join simply by giving your email and a password. And, of course, I couldn’t resist. The figures you can talk to (ask them anything!) include Charles Darwin, Florence Nightingale, Genghis Khan, Socrates, Aristotle, Isaac Newton, Galileo, Albert Einstein, Marie Curie, Catherine the Great, Alexander the Great, Alan Turing, Sigmund Freud, and Leonardo da Vinci. Clearly there are hours of fun to be had, and much time to be wasted. I asked a few of them questions, with the answers reproduced below.
I started with Darwin, of course, and asked him about speciation. He clearly knew much more about species and speciation than he discussed in The Origin. His definition of species at the bottom is spot on:
I asked Freud if he was a fraud, and of course he was evasive:
Genghis Khan denied being a mass murderer:
I asked Socrates the Euthyphro question, and he gave a very good answer!:
I asked Marie Curie how she felt about her work contributing to the atomic bomb. She gave a boilerplate answer, but it shows she (or AI) would make a good politician:
Asked whether Gandhi was mistaken in insisting that India remain a country of simple farming and crafts, and not embrace modern technology, he equivocated.
This gives us a chance to revise history: to find out what can be, unburdened by what has been. Perhaps those of you with a philosophical bent would like to interact with philosophers of the past. In the meantime, I’d better leave this site alone.
Oftentimes the answer to a binary question is “yes.” Is artificial intelligence (AI) a powerful and quickly advancing tool, or is it overhyped? Yes. Are opiates useful medicines or dangerous drugs? Yes. Is Elon Musk a technological visionary or an eccentric opportunist? Yes. This is because the world is usually more complex and nuanced than our simplistic false-dichotomy, false-choice thinking allows. People and things can contain disparate and seemingly contradictory traits – they can be two things at the same time.
This was therefore my immediate reaction to the question – are AI companions a potentially healthy and useful phenomenon, or are they weird and harmful? First let me address a core neuropsychological question underlying this issue – how effective are chatbot companions, for just companionship, or for counseling, or even as romantic partners? The bottom line is that the research consistently shows that they are very effective.
This is likely a consequence of how human brains are typically wired to function. Neurologically speaking, we do not distinguish between something that acts alive and something that is alive. Our brains have a category for things out there in the world that psychologists term “agents” – things that are acting on their own volition. There is a separate category for everything else: inanimate objects. There are literally different pathways in the brain for dealing with these two categories, agents and non-agents. Our brains also tend to overcall the agent category, really only requiring that things move in a way that suggests agency (moving in a non-inertial frame, for example). Perhaps this makes evolutionary sense. We need to know, adaptively, what things out there might be acting on their own agenda. Does that thing over there want to eat me, or is it just a branch blowing in the wind?
Humans are also intensely social animals, and a large part of our brains is dedicated to social functions. Again, we tend to overcall what is a social agent in our world. We easily attribute emotion to cartoons, or to inanimate objects that seem to be expressing emotions. Now that we have technology that can essentially fake human agency and emotion, it can hack into our evolved algorithms, which never had to distinguish between real and fake agents.
In short, if something acts like a person, we treat it like a person. This extends to our pets as well. So – do AI chatbots act like a real person? Sure, and they are getting better at it fast. It doesn’t matter if we consciously know the entity we are chatting with is an AI, that knowledge does not alter the pathways in our brain. We still process the conversation like a social interaction. What’s the potential good and bad here?
Let’s start with the good. We already have research showing that AI chatbots can be effective at providing some basic counseling. They have many potential advantages. They are good listeners, and are infinitely patient and attentive. They can adapt to the questions, personality, and style of the person they are chatting with, and remember prior information. They are good at reflecting, which is a basic component of therapy. People feel like they form a therapeutic alliance with these chatbots. They can also provide a judgement-free and completely private environment in which people can reflect on whatever issues they are dealing with. They can provide positive affirmation, while also challenging the person to confront important issues. At the least, they can provide a cheap and readily available first line of defense.
Therapeutic relationships easily morph into personal or even romantic ones; in fact, this is always a very real risk for human counselors (a process called transference). So why wouldn’t this also happen with AI therapists? Indeed, it can be programmed to happen (a feature rather than a bug). All the advantages carry over – AI romantic partners can adapt to your personality, and have all the qualities you may want in a partner. They provide companionship that can lessen loneliness and be fulfilling in many ways.
What about the sexual component? Indicators so far are that this can be very fulfilling as well. I am not saying that anything is a real replacement for a mutually consenting physical relationship with another person. But as a second choice, it can have value. The most important sex organ, as they say, is the brain. We respond to erotic stimuli and imagery, and sex chatting can be exciting and even fulfilling to some degree. This likely varies from person to person, as does the ability to fantasize, but for some, sexual encounters happening entirely in the mind can be intense. I will leave for another day what happens when we pair AI with robotics, and for now limit the discussion to AI alone. The in-between case is like Blade Runner 2049, where an AI girlfriend was paired with a hologram. We don’t have this tech today, but AI can be paired with pictures and animation.
What is the potential downside? That depends on how these apps are used. As a supplement to the full range of normal human interactions, there is likely little downside. It just extends our experience. But there are at least two potential types of problems here – dependence on AI relationships getting in the way of human relationships, and nurturing our worst instincts rather than developing relationship skills.
The first issue mainly applies to people who may find social relationships difficult for various reasons (though it could apply to most people to some extent). AI companions may be an easy solution, but the fear is that they would reduce the incentive to work on whatever issues make human relationships difficult, and reduce the motivation to do the hard work of finding and building relationships. We may choose the easy path, especially as functionality improves, rather than doing the hard work.
But the second issue, to me, is the bigger threat. AI companions can become like cheesecake – optimized to appeal to our desires, rather than being good for us. While “health food” AI options will likely be developed, market forces will likely favor the “junk food” variety. AI companions, for example, may cater to our desires and our egos, make no demands on us, have no issues of their own we would need to deal with, and would essentially give everything and take nothing. In short, they could spoil us for real human relationships. How long will it be before some frustrated person shouts in the middle of an argument, “Why aren’t you more like my AI girlfriend/boyfriend?” This means we may not have to build the skills necessary to be in a successful relationship, which often requires that we give a lot of ourselves, think of other people, put the needs of others above our own, compromise, and work out some of our issues.
This concept is not new. The 1975 movie The Stepford Wives, based on the 1972 book, deals with a small Connecticut town where the men all replace their wives with robot replicas that are perfectly subservient. This has become a popular sci-fi theme, as it touches, I think, on this basic concept of having a relationship that is 100% about you, without having to do all the hard work of thinking about the needs of the other person.
The concern goes beyond the “Stepford Wife” manifestation – what if chatbot companions could be exploited, or even deliberately optimized, to cater to darker impulses? What are the implications of being in a relationship with an AI child, or slave? Would it be OK to be abusive to your AI companion? What if they “liked” it? Do they get a safe word? Would this provide a safe outlet for people with dark impulses, or nurture those impulses (preliminary evidence suggests it may be the latter)? Would this be analogous to roleplaying, which can be useful in therapy but also can have risks?
In the end, whether AI companions are a net positive or negative depends upon how they are developed and used, and I suspect we will see the entire spectrum from very good and useful to creepy and harmful. Either way, they are now a part of our world.
The post AI Companions – Good or Bad? first appeared on NeuroLogica Blog.