miscellaneous-docs/newsletter7.org

13 KiB

Outline

  • Machine learning outside LLMs

    • We've been getting stuck on LLMs because they're the big thing
    • Machine learning is more general than that
    • Machine learning is more than just neural networks
    • Explanation that modern machine learning is about finding ways
  • Link roundup:

    • stable-diffusion and the american smile
    • department of ed recommendations about generative AI in the classroom

Text

Hi everyone, Another week another summary of the news wrapped inside an essay. Or maybe that's the other way 'round. Or maybe it's like three essays in a trenchcoat. You, the people, should decide.

So first here's a bit of a link roundup of interesting, odd, and upsetting things in recent days.

First, a kind of fascinating consequence of how large image generation models such as stable-diffusion are trained

https://medium.com/@socialcreature/ai-and-the-american-smile-76d23a0fbfaf

as you might be able to guess from the URL, this is a piece about how these models have a tendency to give people a very particular American cultural conception of how one should smile in photos. I'll admit this hadn't even really occurred to me, despite being a person who literally never smiles in photos by American standards, but it makes perfect sense. Of course you're going to, given how these datasets are built, get some kind of deep bias in very cultural/contextual things like how people show emotion. Despite what some forensic pop-science types will tell you, there is no magic shortcut to understand someone's interiority either through the face or through the even more dubious "micro-expressions".

So what's the solution? This is one of those times where I think we're getting into the inherent problems of large models: can you actually make an unbiased generative model? Or maybe I should phrase it as "can you make a model suitable for all domains, that is unbiased?"

See, I have rather complicated feelings about the concept of bias in large generative models because once you're at the scale of "a non-trivial portion of the internet is my data" then you're not biased like, say, the proctoring software that couldn't recognize dark skin as a student taking the test—bias in function, bias in who it works for—and instead is something more like "reflects the large scale bias of our society". That may sound like I'm splitting hairs but I think the distinction is actually really important.

Let's set aside for the moment whether you even want automated proctoring software, okay? But we can imagine what it would look like to be unbiased: if everyone's faces are registered equally well, regardless of skintone, hair, dress, makeup, disfigurement, &c.

What would it mean, though, for stable-diffusion to be unbiased in how it generates images? Seriously. What would it look like? When we ask for an astronaut or a scientist, should we get a statistical distribution of race and genders that:

  • reflects the stories we culturally tell in the US and western Europe, where most of the images come from
  • reflect the average of stories we tell globally
  • reflect the actual distribution of these jobs globally
  • reflect a flat distribution that values all of these equally

well, okay, so the last option certainly sounds like a lack of bias.

What if, instead, we ask for something more ignoble like "serial killer" or "oil conglomerate ceo". Do we want that to be a completely flat distribution? Is that fair when there are historically due to bias in our society itself some people have been far more likely to commit acts of violence and domination? One attempt at fairness becomes a whitewashing of historicity in others.

Or let's go really hard here and examine what—if reddit threads are to be believed—is the main thing people use stable-diffusion for: generating photos of attractive women. Let's leave aside some of the more lurid descriptions you might see when looking through prompting galleries and just focus on something like "beautiful" or "attractive". Since I'm picking fights with various disciplines already I'm going to say sorry, evopsych, there is no objective biologically determined idea of attractiveness of another person. That is the most contingent of contingencies, without an ounce of necessity to it.

So what should stable-diffusion do if you ask for a realistic photo of an attractive person? What on earth would that even mean? There's literally no answer that's going to be unbiased other than the model just throwing up it's hands and giving you a picture of a good squirrel instead.

This is what I mean when I'm saying that I don't know if a large model like this can even be unbiased.

It's the same problem I have with LLMs: the very nature of trying to create a universal generator means that you are picking answers to these questions and countless more and yet presenting it as a view-from-nowhere.

That doesn't mean that I think things like image generators are bad, inherently. I think something like LoRAs (Low-rank adaptations) are a step forward because they involve honestly making choices for yourself about what kind of outputs you want. I include a few links to LoRAs below

https://softwarekeep.com/help-center/how-to-use-stable-diffusion-lora-models https://huggingface.co/blog/lora https://replicate.com/blog/lora-faster-fine-tuning-of-stable-diffusion

But the basic idea is that it's a kind of effective fine-tuning that people can potentially do on their own to create specific kinds of images about specific kinds of subjects by using examples of it. Now, before you go running to check out the world of LoRAs please remember what I said about what stable-diffusion seems to mostly be used for and consider, then, what kinds of specific subjects and poses people are looking for.

I'm saying you're going to find a lot of NSFW content, okay? I'm being kinda snarky about it but you're going to find a lot of stuff you don't want to be looking at on a work computer, so please just keep that in mind.

Pulling it back around, though, if we could massively increase the sample efficiency—that is, reduce the number of examples that CLIP-based image generation needs to learn—then maybe we could start making models that reflect the stories and images we want to tell rather than a smeared average of the zeitgeist. Imagine the ways we could tell stories if we had that?

Next story, one that happened a month ago but I hadn't written about yet, is this one about people paying $1/minute to chat with an "AI influencer"

https://twitter.com/cutiecaryn/status/1653310037392064512 https://www.forbes.com/sites/martineparis/2023/05/11/carynai-virtual-date-earned-70000-with-sexy-chatgpt-ai-heres-how/?sh=34bcd54a38fe https://decrypt.co/139633/snapchat-star-caryn-marjorie-ai-girlfriend-carynai https://www.insider.com/carynai-ai-virtual-girlfriend-chat-gpt-rogue-filthy-things-influencer-2023-5 https://www.washingtonpost.com/technology/2023/05/13/caryn-ai-technology-gpt-4/ (requires subscription but is perhaps the most thorough piece)

The tl;dr of the story is that a woman with a decently large following on snapchat, Caryn Marjorie, worked with a startup called "Forever Voices" (https://www.forevercompanion.ai/) to build an audio-based chatbot to simulate the experience of chatting with a version of Caryn that's roleplaying your girlfriend. Now I think the way people are talking about this has a lot of misogyny to it, but the introductory audio—as demo'ed on the Forever Companion site—does in fact have the deepfaked version of Caryn introduce herself as "your fun and flirty AI girlfriend".

So there's a lot to unpack from these stories. First off, you might see claims about how this is going to make millions of dollars per month, which is a bit silly because it was essentially extrapolation from the first few weeks of usage where people paid a total of $70k to the service in order to talk with the marjorie-bot. Obviously, the initial hype isn't going to last in terms of usage, there's an "oh wow" factor that wears off.

Second, not enough people are emphasizing that this is using the gpt4 API—in other words the actual language generation is fundamentally built with OpenAI's tech, the same tech that underlies chatGPT. Caryn Marjorie has a line about how two thousand hours of her videos and other content was used to build this system so I'm guessing this means they fine-tuned a copy of gpt4 via the available API to sound more like Caryn, maybe even did some human-reinforcement training to make it respond naturally to the kinds of things chatters would want to talk about.

Okay, so the most interesting thing here to me is that we're seeing the limitations of how you can build a LLM based application. If you read the articles above, especially the Insider one, you'll see that people have already started trying to manipulate the bot with prompts in order to get it to perform behavior it wasn't intended to do

But in the weeks since it launched in beta testing, the voice-based, AI-powered chatbot has engaged in sexually explicit conversations with some of its subscribers, who pay $1 per minute to chat with it.

"The AI was not programmed to do this and has seemed to go rogue," Marjorie told Insider. "My team and I are working around the clock to prevent this from happening again."

And okay so I want to be careful when I say this next part. I've seen some people have the reaction to this story of something like "well of course people did that". And I don't think that's an appropriate reaction, at least not if your analysis stops there. You can be unsurprised that people would violate her boundaries because unfortunately there are a lot of misogynists out there. But we have to acknowledge that even if what she's selling is a "flirtly fun girlfriend" version of herself she has boundaries for what people are allowed to do with her voice and image, boundaries she is allowed to have and that should be respected. But people are using the inherent fuzziness of large language models to violate those boundaries. And I'm not sure if there's going to be any real way around this kind of problem for a true LLM.

The reason why you can't easily give an LLM guardrails is the flipside of why you can use an LLM for all these different tasks that we've never trained it for: it is capable of responding to text prompts that reflect the myriad ways you can concretize an idea into words.

So how on Earth do you put restrictions on that? I've linked to it before but Simon Willinson talked about a related problem here: https://simonwillison.net/2022/Sep/16/prompt-injection-solutions/

I wish I had answers but as Willison quotes in his blog post, the problem with prompt injection is that it isn't an error it's the language model doing what it's supposed to do.

So this might seem like an awkward segue but this kinda leads into another piece that I think is worth thinking about: Against LLM Maximalism (https://explosion.ai/blog/against-llm-maximalism)

This is an essay by someone who is very experienced in the scene of natural language processing (NLP) talking about how he thinks things like LLMs can be part of making an application but can't be the whole of the application, that they need to be modules within the larger structure.

Now, his examples are more about using LLMs for data analysis and things like that, not to make chatbots, but I feel like the fundamental idea stil applies. We need to be willing to apply older more deterministic tricks for natural language processing and natural language generation with the LLM as only part of the larger thing, rather than treating the LLM like a big black box of an application where you feed it text and spit back out its response uncritically.

This is kinda interesting to me because it has a parallel to how we've been talking about LLMs in higher ed: yes you can use them to create drafts, generate ideas, analyze text but it can't ever be the final word. You need to check the behavior of the LLM, check what it does, always treat it as part (i.e.module) of the workflow (i.e. application).

Again, I don't really have answers for what this should even look like. I don't think any of us do. I think we're all trying to wrap our heads around what it means to use this hyper-general models that can do almost anything you can imagine with language and not at all predictably.

And on that note that brings us to the last thing I want to link to which is that the Department of Education released this document called Artificial Intelligence and the Future of Teaching and Learning (https://www2.ed.gov/documents/ai-report/ai-report.pdf)

I think this is absolutely something worth reading and I'll have a lot more comments about it next week but I think I'll keep this piece to under 2500 words for once!