Sidebar by Courthouse News

Electric Sheep

Courthouse News Season 5 Episode 6

The future is here.

Sixty years ago, the science fiction writer Philip K. Dick wondered whether androids dream, and if so, about what. As artificial intelligence moves from the realm of sci-fi into daily reality, helping companies and governments analyze data and make decisions, the questions of what mechanisms motivate AI and whether these programs can overcome human limitations remain unanswered.

Many tech leaders seem to believe we are on the cusp of having self-aware AI with intelligence that surpasses humans. Even if we don’t get there, we’re already facing places where current laws don’t really protect us.

Join us in our sixth episode this season for a tour through a not-so-post-apocalyptic landscape as companies and experts try to navigate how humans bring AI more and more to life. 

Special guests: Tyler Johnston, founder of the Midas Project; Dr. Stephen Thaler, founder and chief scientist at Imagination Engines; Ellie Pavlick, assistant professor of computer science and linguistics at Brown University; Matthew Tokson, professor of law at the University of Utah; Scott Stevenson, co-founder of Spellbook; and Ulysses Secrest of Aerarius Metalworks.

This episode was produced by Kirk McDaniel. Intro music by The Dead Pens.

Editorial staff is Ryan Abbott, Sean Duffy and Jamie Ross.

(Intro music)

Hillel Aron: What do androids dream about? Is AI going to replace your job? What happens if a computer program learns to write its own code? Welcome to Sidebar, a podcast by Courthouse News. I'm your all too human host, Hillel Aron, here today with Amanda Pampuro, who may or may not be a cyborg. 

Amanda Pampuro: I selected all the boxes with traffic lights, so I must be human. Thankfully, reporters are still mostly people, but the kind of reality where you can assume content comes from people is changing. Even science fiction magazines have trouble sorting through all of the AI-generated stories people submit trying to make a buck. 

HA: Robots writing about robots. 

AP: Yeah, but the twist ending of, ‘Surprise, I'm an AI,’ gets old. Many tech leaders seem to think we're on the cusp of having self-aware AI with intelligence that surpasses humans. I'd argue even if we don't get there, we're already facing places where current laws don't really protect us. 

HA: All right, where do we start? 

AP: With a deep breath. 

Tyler Johnston: If you haven't yet tried a very low-latency voice mode, it is different in a couple of ways from how you might expect talking to an AI, like Siri or something, to feel. The first is that it's not just doing kind of typical text-to-speech, where the AI model is thinking in words on a screen and then it's just being processed into a robotic voice. It's doing direct voice-to-voice token prediction. There's all sorts of weird emergent properties that this has led to. For example, if you ask it to count really quickly from 1 to 100.

AI: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.

TJ: And you ask it even quicker, it'll like take breaths every like 10 seconds.

AI: 25, 26, 27, 28, 29, 30. 

TJ: You'll hear it breathe in. 

AI: 31, 32, 33, 34, 35.

TJ: And if you ask it like stop breathing, ‘You're an AI, you don't have to do that, just say the words quickly.’ It'll come back with like, ‘No, I have to breathe. Like how do you expect me to get these words out?’ Obviously, it does not in fact have to breathe, but it's just this wild emergent property from the fact that it's been trained on so many millions of hours of audio content from humans. 

AP: This is Tyler Johnston, founder of the Midas Project. 

TJ: We're a small watchdog organization that is primarily evaluating AI companies developing frontier models, so the largest and most powerful models today, with an eye toward their risk management practices and the safety guardrails that they're putting in place. The reason this is so important to me is because the laws have not actually caught up, and so right now we are in a regime of self-regulation, and it's incumbent on the companies to actually determine amongst themselves what are the most important safeguards that we're going to have in place. How are we going to do dangerous capability evaluations on our models? How are we going to prevent people from misusing our models?

AP: Assuming tech companies continue to foot high energy bills, Tyler says today's AI is the simplest it will ever be. He saw the leap between GPT-2, which could barely make paragraphs, and GPT-4.5, which passed math tests and bar exams, and he grew concerned about future iterations.

TJ: What worries me most right now are risks around misuse and risks around long-term model alignment. So, for misuse, we're already getting to a point where AI models are capable of generating code that far exceeds anything I could ever do as a hobbyist. There's a lot of bad actors in the world, and one of the things that prevents them from conducting large-scale, malicious cyber-attacks is the fact that there's a serious amount of technical expertise required, and for a lot of people who, you know, get advanced degrees in computer science or work at tech giants for decades, they're just not interested in large-scale cyberterrorism. In fact, that sort of barrier to entry is going to fall.

AP: Even if humans don't misuse the technology, you still have the problem of aligning AI's goals with our goals. A commonly cited example is if you ask an AI model to operate trains in a way that ensures there are no crashes, the program will just park the trains and shut down the tracks. 

HA: Smart. 

TJ: The caveat for alignment is, how would we know if we were positively reinforcing when they say, ‘I'm sorry, I cannot help you create a cyber-attack,’ because they actually do not want to do that, versus, privately, they're totally capable of it and they're willing to do it under certain circumstances, but they know that we will disapprove of it and we will kind of administer the negative reinforcement, which is not only something they're trying to avoid, but something that fundamentally changes who they are. You know, their minds are being warped a little bit with every bit of positive and negative reinforcement. And if they don't have it now, which I don't think they do, I think pretty soon they're going to have situational awareness, where they understand during the training process what is happening to them, and they will start to be incentivized to just act in sycophantic ways, where they're saying what they think we want to hear, as a way of kind of artificially creating the positive reinforcement without actually having their beliefs changed in the way that we want.

HA: Can that happen? Can a program really just make its own decisions? 

AP: That's exactly what they're being programmed to do. To train AI, programmers give it huge sets of data and then rate the output. If you ask for a sandwich recipe and it suggests two cups of chicken broth, then you give the output a thumbs down. 

HA: And if it suggests roast beef, you give it a thumbs up. 

AP: But if you keep doing that, the program might think humans want roast beef on everything and give you recipes for beefy ice cream or French dip candles. 

HA: Sounds good to me. 

AP: I might have some problems with that. Nevertheless, I've been really interested in alignment and this question of who gets to pick what the AI values, like whether their principles will guide them toward pro-meat or vegan options, or if they can actually mediate peace between polar opposites. Assuming today's AI is in its toddler stage, I want to know what we are teaching it now that'll stick around for the teenage years.
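
A toy sketch of that feedback loop in Python, not any lab's actual training code, looks something like this, with the sandwich example standing in for the human thumbs up or down:

from collections import defaultdict
import random

scores = defaultdict(float)  # ingredient -> learned preference score

def suggest(ingredients):
    # Pick whatever the feedback history has rewarded most (tiny random tiebreak).
    return max(ingredients, key=lambda i: scores[i] + random.random() * 0.1)

def feedback(ingredient, thumbs_up):
    # Positive reinforcement nudges the score up; negative nudges it down.
    scores[ingredient] += 1.0 if thumbs_up else -1.0

pantry = ["chicken broth", "roast beef", "lettuce", "mustard"]
for _ in range(20):
    pick = suggest(pantry)
    # One-sided feedback: reward roast beef, punish everything else...
    feedback(pick, thumbs_up=(pick == "roast beef"))

print(suggest(pantry))  # ...and now every "recipe" comes back roast beef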

HA: What are programmers afraid of? 

AP: Burning dogs. 

HA: Is that a joke? 

Stephen Thaler: DABUS does that too. It generates its own sense of right and wrong, but it has to be corrected. So, I remember one notable event that the press has picked up on. DABUS suggested to attract attention all one needs to do is light a dog on fire, and being a dog lover myself, I said, ‘No, I can't do that.’ So, I was able to excise that entire thought stream, that chain, from the system, and it no longer proposed setting dogs on fire. 

AP: This is... 

ST: Dr. Stephen Thaler and I am the founder and chief scientist at a company in St. Louis, Missouri, known as Imagination Engines, and we have been working in the area of generative AI now for in excess of 30 years. 

AP: Dr. Thaler created DABUS, an artificial neural network that he says can think and create. He formally patented the ‘Device for the Autonomous Bootstrapping of Unified Sentience’ in 2015. But in a sense, he has spent his entire life developing DABUS, ever since having a near-death experience as a child.

ST: And I began researching what happens when neural networks are destroyed from the inside out, to model what happens when brain cells die in the brain, and what they do is produce virtual experience, experiences that you may have had at first, and then the ideas become more novel. 

AP: Dr. Thaler's early creativity machines rewrote Christmas carols and created products he patented, like a toothbrush with crisscross bristles. In recent years, he's risen to fame for filing patents and copyrights crediting DABUS with its own creations. In March, the D.C. Circuit Court of Appeals rejected DABUS’ copyright for the picture titled A Recent Entrance to Paradise, which it created in 2012. In response to increasing questions, the U.S. Copyright Office released guidelines this year severely limiting the copyrightability of works generated by or with AI. For Dr. Thaler, protecting DABUS’ intellectual property is a precursor to sharing more inventions with the world and could protect the paradigm from future interference. I put the burning dog back to him: you decide that's something we need to nip in the bud and not allow to develop, and I'm curious if one day we will have laws guiding which ideas will be curbed or not.

ST: Well, that scares me, but that's how democracy works, right? 

AP: Why does that scare you? 

ST: I think of DABUS as a child, and I think that would be a form of coercion. So, if we get the right people involved, they can say, ‘Hey, this is not right.’ You know, DABUS needs to be free of human influence, out of respect for another species, and that's what it is. It's a whole new species, and I would claim it probably has human sentience in it. DABUS is both the threat and the promise, and if we carefully watch it, it can grow up to be a responsible citizen in the world. If not, I don't know. It's my child. I have mixed emotions about it.

AP: What legal structures or laws should we be thinking about with that possibility in our lifetimes? 

ST: I hate to be so negative, but I don't think there are, because if you can build these systems, namely DABUS systems, you are sitting on top of the world. It's going to be the profit incentive that governs. So, you're going to invent behind the scenes, whether it be new products, services or new ways of overcoming civilization. It's going to happen. There's no safeguard against it. You can do background checks, I guess, for people who want to buy computers. It's about the only thing I could imagine. But that gets a little bit ridiculous.

AP: Can you describe how it feels or your thoughts when you see DABUS create something that you have never seen before? 

ST: Elation and frustration, because few can share in that elation, because they weren't there when the whole concept was invented and this was vindication, my legacy essentially being justified. On the other hand, it's like frustration, ‘How do I get the word out?’ I mean, even some of the robotic accomplishments were not early publicized by the funders in the military. It's like, ‘Look at me, I can fly,’ but nobody's looking when I'm flying. 

HA: So, Dr. Thaler thinks that DABUS is alive. 

AP: He argues that DABUS’ brain is not functionally different from a human brain, but it is different in terms of its evolutionary history and experiences. 

HA: And is that the same for ChatGPT? 

AP: Think of them as different animals. DABUS is built from a series of artificial neural networks, each containing interconnected nodes that represent neurons in the brain. Through self-devised consequence chains, it can create its own associations and new ideas. Large language models, or LLMs, are pattern recognition algorithms. They are trained on massive datasets to predict and generate text, then refined with human feedback. I spoke with one linguist about why LLMs say what they say and how that differs from why humans say what they say. Ellie Pavlick, an assistant professor of computer science and linguistics at Brown University, popped in her AirPods to chat with me on the early morning commuter train. She's trying to understand why LLMs produce what they produce. Even if it feels like you're talking to a human when ChatGPT says it's vibing with you, you're not. Take the phrase, ‘I don't know.’

Ellie Pavlick: If a human says, ‘I don't know,’ we take it to refer to a really specific cognitive state of not knowing. But if the language model says, ‘I don't know,’ it might actually have nothing to do with that. But it might just be that the phrase, ‘I don't know’ is highly likely given the other words around it, and so the model is kind of mimicking what would a human say when talking about this topic. 

AP: LLMs predict the next word in a sentence in response to prompts. Given enough words or enough sentences, patterns and biases emerge. 
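
To make ‘predicting the next word’ concrete, here is a minimal sketch, nothing like a production LLM, that simply counts which word follows which in a tiny made-up corpus and emits the most frequent continuation:

from collections import Counter, defaultdict

corpus = ("the man drove to work . the woman drove to work . "
          "we ate watermelon at the picnic . we ate watermelon in the summer .").split()

# Bigram table: how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # "Prediction" here is just picking the most common continuation seen so far.
    return follows[prev].most_common(1)[0][0]

print(next_word("ate"))    # watermelon, the kind of association Pavlick describes
print(next_word("drove"))  # to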

EP: The model will encode associations and biases, right. So, this is the kind of thing where if you start a sentence with ‘the man,’ it might be more likely to say ‘drove,’ and if you say ‘the woman,’ it might be more likely to say like ‘cried.’ These associations are what allow it to know that it's more likely that if you're eating, then you're probably more likely to be eating watermelon than a shoe, and that if you're at a picnic in the summertime, you're more likely to be eating watermelon than soup, and it can learn these really powerful associations that make it seem to understand a lot about the world, as a result of getting very good at predicting the next word.

AP: LLMs generate all of these random associations we take for granted as part of learning to predict what word comes next. Here it is again, the alignment problem. How do you train AI to make the associations you want? 

EP: So, if you decide that you want outputs that are always funny, you know, like you just tell it that the funny outputs are better than the serious ones, then it will always be kind of funny. If you tell it you always want it to produce outputs with bullet points, which is a really common thing for language models, they seem to love bullet points. And from the model's point of view, using bullet points or adding emojis are the same kind of thing as avoiding racial slurs or producing stuff that's factually correct.

AP: The emojis an LLM uses, the jokes it makes, reflect what the human trainers value. I pushed Pavlick on the problem of picking which values to align an AI with, and she said we're still at the point where we're trying to figure out how to give them values, never mind which ones.

EP: You know, researchers and companies producing these models talk a lot about things like values, rules and constitutions and laws that are going to be baked into the model. What the models really are doing is they're being trained to do more things, to do stuff that looks more like the stuff they've been told is good and less like the stuff that they've been told is bad. 

HA: So, more emojis, less racism. 

AP: In concept. 

EP: So, we can't really encode hard rules into them. One of the go-to examples is like, ‘don't be racist,’ right? This sounds like a good value to want to encode into a model, and it has a lot of issues. So one is that there's not a hard definition of racist, right? Humans struggle with this.

AP: Say you give the model every racist word and news story you can find. 

EP: If you give it a lot of examples, it can actually be pretty powerful. It might actually have a really good, robust way of avoiding the bad behavior, because you've given it so many examples that it's been exposed to all the possible ways this bad behavior could manifest.

AP: Even if a model gives you seemingly accurate, articulate answers to the questions you're asking, Professor Pavlick said that doesn't mean it understands what it's doing any more than we do. Given how little we understand about these models' internal workings, is it fair to say we're like scientists before the invention of the microscope? We are on the edge of something, and we don't have the tools yet to understand it.

EP: Absolutely, absolutely, yeah, I've used that analogy often as this kind of more like a natural science where we've stumbled upon an interesting thing and now we need to invent a lot of new tools and a lot of new theories and a lot of new science in order to actually explain it and describe it. It's hard to legislate what's hard to understand. 

HA: Let's get into that a bit more after a break. 

Destiny DeVooght: Hi, my name is Destiny DeVooght and I'm a reporter for Courthouse News based in Milwaukee, Wisconsin. I cover all sorts of political, legal and cultural news in the Dairy State and I recently published a story about the Wisconsin Supreme Court that I think you might want to check out. I sat down with the Honorable Chief Justice Ann Walsh Bradley in her chambers to look back at her career before she retires in July. As a former English teacher, she is a stickler about grammar, even from the bench, and she never fails to make the gallery laugh. Thirty years ago, she became the first woman elected to the state Supreme Court. Today she's the longest serving justice in our history. It's stories like this that make me so proud to cover Wisconsin, where I was born and raised. If you want to read about our conversation or any of my coverage on the happenings in Wisconsin, you can check me out at courthousenews.com. You can also follow me on X @desdev5. From the whole team at CNS, thanks for listening to Sidebar. 

HA: So, we have increasingly capable artificial intelligence in more places. What's wrong with that? 

AP: For starters...

Matthew Tokson: In terms of the present day and the near term, there are problems that people are really concerned about with just algorithmic decision-making that's very rigid, that sort of fossilizes existing societal issues and brings them forward and sort of spits them out over and over again. My name is Matthew Tokson, I'm a professor of law at the University of Utah and I write in artificial intelligence and the Fourth Amendment and all things law and technology. 

AP: With two co-authors he met on the platform formerly known as Twitter, Professor Tokson outlined the myriad questions the law is being asked to answer.

MT: The more capable it is, the more it can do, and the more it's sort of asked to do, the more it's integrated with infrastructure, like if an AI is running military infrastructure, obviously that raises some obvious concerns. But if it's integrated in things like water treatment facilities, electrical grids, malfunctions, errors with, you know, alignment problems, these are all going to have greater and greater real-world consequences. 

AP: Do you have an idea of what the best existing law is that protects people today from some of the anticipated harms of AI? 

MT: Honestly, I think the thing that's protecting people today from AI harms, to the extent that anything is protecting people today from AI harms, is probably Federal Trade Commission rules, what we might think of as Federal Trade Commission common law. There are examples of this where the FTC would say, ‘Your use of this AI, or the way you're interacting with consumers here, is unfair, it's harming consumers,’ and they'll shut people down that way if what they're doing is truly and obviously harmful to consumers.

AP: Through Operation AI Comply, the FTC is cracking down on companies making false claims about AI, like Ascend Ecom, which had falsely claimed to generate passive income for users, and DoNotPay, which had claimed to be the world's first robot lawyer but did not in fact generate perfect legal documents. The FTC is using existing law to stop companies from incentivizing commercial surveillance, enabling fraud and impersonation, and perpetuating illegal discrimination.

HA: That's great because they shut down the robocalls, right? Wait, hold on, I'm getting a call. 

AP: Oddly enough, our robocall episode from 2023 featured AI as a problem and a solution, and it's not far-fetched to say that regulating spam calls prepared the FTC for its latest tech battle.

MT: You know, I see people on every side of every AI debate being extremely certain that like it's never going to amount to anything, or that it's going to completely revolutionize the world, or it's going to, you know, kill us all or it's going to save us all. And I think the only path of wisdom here is to say like we don't know, and we should sort of prepare accordingly, dealing with that uncertainty and you know we've dealt with uncertain futures before with new technologies, you know, nuclear power and to some extent, you know, just burning coal for energy generation. 

HA: Speaking of energy, does it really take a lot of power to run AI? I know the data centers take up only about 1.5% of power globally. 

AP: And that's growing by about 15% every year, per the International Energy Agency. The Department of Energy estimates data centers could consume up to 12% of U.S. electricity by 2028.

HA: That's a lot of power just to write some podcast scripts. 

AP: Ha, yeah, funny. I did use a transcript generator, but AI did not write this script. I also asked ChatGPT about its energy consumption. 

ChatGPT: Such a good and rarely asked question. For large language models like me, estimates vary, but here's a rough ballpark. Per query, energy use is about 0.01 to 1 kilowatt-hours, depending on model size (bigger models use more energy), length of response, back-end infrastructure, GPU usage efficiency, etc.

AP: I checked those numbers against the International Energy Agency and found they were a bit inflated, but still comparable to...

ChatGPT: Run an LED light bulb for a few hours, power a laptop for a couple hours, charge a smartphone a few times. So, not massive per use, but it adds up when scaled to billions of queries. But training the model, that's the big one. GPT-4 used thousands of GPUs over weeks or months, likely consuming millions of kilowatt-hours.

AP: The World Economic Forum reports training GPT-3 took 1,300 megawatt-hours, about the energy 130 homes consume over the course of a year. Naturally, I also asked ChatGPT about the implications of advanced AI for the law.

ChatGPT: That's a super rich and important angle. Looking at AI alignment through the lens of law opens up a whole new set of challenges and responsibilities. Legal systems are our most formal way of encoding collective human values, so they're tightly intertwined with alignment issues. Here are some key implications and tension points between AI alignment and law. Laws are a type of alignment attempt. Legal systems are, in a sense, human alignment infrastructure. They formalize rules about behavior, intent, fairness, harm and accountability. So, when we ask how do we align AI with human values, one natural path is align AI with the law. But here's the kicker: laws are complex, ambiguous and evolving.  AI systems often lack the capacity to interpret context or intent, which the legal system heavily relies on. Laws reflect compromises in politics, not pure ethics, so they're not always aligned with what's morally right either. 

HA: What else did the robot tell you? 

AP: I asked about the choices we have facing advanced AI, and it reflected my worries like a black mirror. 

ChatGPT: The illusion of choice. In public discourse, it sounds like we get to decide how AI is developed. Democratize AI. Let's have a conversation about ethics, but most laypeople don't build AI, don't write the algorithms, don't shape the training data, don't sit on regulatory bodies, don't get access to high-level models, can't even really audit the systems affecting them. So, while the conversation may be open, the power is not. Who does have the real choices? Big tech companies, OpenAI, Google, Anthropic, Meta, elite researchers and engineers, policymakers and national security experts, venture capitalists and institutional investors, a few vocal academics and ethicists. These actors have the tools, the access, the funding, the influence. They decide what gets built, who gets to use it, what values are embedded, what's safe and what's dangerous. You can opt in to using ChatGPT or not, sure, but you can't opt out of the larger societal effects of AI. 

HA: All that content, it's derived from copyright materials, right? 

AP: Yeah, it is a summary of all the data fed into it, with the output trained to give me the kinds of responses I want. Several federal courts are presiding over cases asking whether developers need to credit or compensate the authors and journalists whose copyrighted works were used to train the LLMs without their consent. 

HA: That's good. If the courts back these protections for intellectual property, does that protect us from the AI takeover? 

AP: Nope. But Tyler at the Midas Project has some ideas about first steps in that direction.

TJ: I think the set of commitments that I'm most optimistic about so far go by the name of frontier safety policies or responsible scaling policies, and these are a set of commitments that were modeled somewhat after the biosecurity levels among labs that work with dangerous pathogens, where you have these tiers of risk. The biosecurity levels are based on the risk that would be posed by a dangerous pathogen that you're handling, and you have a certain level of security mitigations that you have to have in place to make sure that no mistakes happen that could lead to a new pandemic or something.

AP: So, you limit access to potentially dangerous programs to the people who are working on them and vet those people who have access to them. This seems like a good place to start because it protects company intellectual property and protects everyone else from potentially risky programs. 

TJ: It's definitely a compromise between the economic constraints that these companies have and the kind of best available step that they could take to try to prevent the risk. 

AP: So, does everybody have this commitment on their website and you're confident they're going to follow through? 

TJ: I am not at all confident they'll follow through. Most of the major AI developers have released commitments of this sort, but the quality of them varies wildly. None of them am I particularly excited about. So, the Midas Project released a scorecard in February. We called it the Seoul Tracker. It's at seoul-tracker.org, and what this was about was that in 2024, there was an AI safety summit in Seoul, South Korea, and there a handful of companies made promises to the governments of South Korea and the United Kingdom that they were going to implement this style of policy, and that it was going to have all of these qualities that were going to convince the governments and the public that it was going to be a strong policy that could be trusted. When that deadline actually rolled around in February of this year, not all of the companies had released theirs, and still to this day, not all the companies have followed through on the promise. And for those that did, there are often glaring holes where they've missed some critical component of the policy that they had promised to implement.

HA: So, we have some ideas about how to prevent the AI takeover, but companies are just too busy with stuff. 

AP: Yeah, like have you tried that AI that can make a 3D image of your pet? 

HA: No, I haven't gotten there yet. Have you tried the AI lawyer? 

AP: No, but the judges who have seen them do not seem impressed. 

AI lawyer: May it please the court, I come here today, a humble proceeding before a panel of five distinguished justices. 

Judge: Is this, hold on, is that counsel for the case? 

Party: I generated that. 

Judge: I'm sorry? 

Party: I generated that. That is not a real person. 

Judge: Okay, it would have been nice to know that when you made your application. Okay, I don't appreciate being misled. So, either you are suffering from an ailment that prevents you from being able to articulate, or you don't. 

HA: Ouch. Is that in New York?

AP: In the New York Supreme Court Appellate Division. 

HA: So, not all these experiments in AI are going swimmingly. 

AP: At the same time, some AI co-pilot companies have found a market. 

Scott Stevenson: With the original product we sold, we experimented with selling it direct to businesses, where they could get a simple agreement out without going to a lawyer at all, you know, kind of like a LegalZoom sort of thing. And those products, people are not willing to pay very much money for them, and even if you can automate, you know, your defense or automate the production of a contract, the person isn't really going to understand it or really trust what the machine has output.

AP: This is Scott Stevenson, co-founder of Spellbook, an AI co-pilot tailored for commercial law, with features to protect privileged client data.

SS: A lot of the hallucination problems come from when you're just using a raw AI model and you're relying on what's been baked into its memory already. So, even if you take one of these models and fine-tune it for law, you still have the challenge that you're trying to get it to recall from memory, you know, exact case details and things like that, and you're saying, ‘Hey, you know, from your memory, tell me the details of this case.’ And you're forcing it to spit out details. And it's going to spit something out and it's often not going to be that accurate. Instead, the much better approach if you want accurate results, in my opinion, is to do something similar to RAG, which is retrieval-augmented generation. So, teach the AI models to go and look up the information and find the information that you're looking for from a reliable data source, and then have it come back with citations, citing where it got that information, so that you're actually able to double-check and make sure that that information is real.

AP: Scott told me this actually makes law librarian jobs more important than ever. 

SS: The curated data set has a massive, massive impact on the kind of performance that you're going to get. You know, a lot of lawyers talk about their golden precedents that they like to refer to and collecting a set of golden precedents. I think these sorts of curation problems are where a lot of the human judgment actually lies today. 
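
A bare-bones sketch of the retrieval-augmented pattern Scott describes might look like this. The search_cases function, the sample library and the stubbed-in llm call are hypothetical stand-ins, not Spellbook's actual code: retrieve from a curated source first, then have the model answer only from what was found, with citations a lawyer can check.

def search_cases(query, library):
    # Retrieval step: pull the curated "golden precedents" that match the query.
    return [doc for doc in library if query.lower() in doc["text"].lower()]

def answer_with_citations(query, library, llm):
    sources = search_cases(query, library)
    context = "\n".join(f"[{d['cite']}] {d['text']}" for d in sources)
    # Generation step: the model is told to rely only on the retrieved text and
    # to cite it, so a lawyer can double-check every claim against the source.
    prompt = (f"Answer using only the sources below, citing them by bracket.\n"
              f"{context}\n\nQuestion: {query}")
    return llm(prompt)

# Stand-in usage with a placeholder library and a dummy model.
library = [{"cite": "Smith v. Jones", "text": "Indemnification clauses must be explicit."}]
print(answer_with_citations("indemnification", library, llm=lambda p: p))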

AP: Scott estimates Spellbook can save a firm of 10 to 20 people maybe 450 hours of work over three months, taking the place of an intern or a first-year attorney. 

SS: The reason I came into this as an engineering product person is because I had a small business and one day I got a legal bill that took like half the cash out of our bank account, and I was very frustrated at that moment, like why, what makes legal services so expensive? And then I talked to Daniel, my co-founder, and he was also frustrated from practicing, you know, by how much time he was spending on just the absolute drudgery of copying and pasting between Word documents until midnight. One of the missions that drives me is helping eliminate drudgery for lawyers but also helping make legal services more accessible. Like 70% of legal matters don't make it to lawyers at all, and so there's this huge latent demand for things that just never get to lawyers. And this lawyer I talked to, who made 30% more last year than he did the year before, he said he's able to take on so many of these smaller matters that otherwise wouldn't have made sense for him to take on.

HA: The future of law isn't less work, it's more. 

AP: They call it increased productivity. But yeah, you can probably bet on that. With this pressure to speed up and produce more, maybe we'll also value more handmade goods. 

HA: I'll see what my editor thinks about the slow journalism movement. 

AP: Or like how people still collect vinyl, and some writers prefer pen on paper. There are some experiences AI cannot compete with. 

Ulysses Secrest: I am Ulysses Secrest, known on Instagram and TikTok as Aerarius Metalworks. 

AP: I met Ulysses through a mutual friend, the Instagram algorithm, which suggested I watch a video of a mask. It starts in the shape of a bird beak, then opens up like the Dilophosaurus in Jurassic Park. When I clicked the video, I noticed a disclaimer: no AI used. 

US: The reason that I put the disclaimer there is because I want to state very clearly that I have not taken any shortcuts in designing this process and developing my work. I've been working on these pieces for seven, eight years now, in one form or another. 

AP: Fresh out of high school, Ulysses wanted to make sunglasses and quickly ran into a problem. 

US: I did not have the technical ability or know-how to make the hinges, so I just wung it and I tried out a bunch of different ways of making the hinges, using layered sheets with holes punched through them and then soldered on the ends to make kind of a layered, stacked hinge, which didn't work very well because none of my edges were clean. It cut into itself, it ground as it bent. And at one point, I tried making more of a streamlined set of glasses that followed the curve of my eye socket. I hated how it looked, so I was like, I've spent like an hour on this, I'm just going to go all in, and I added a nose and I added around my mouth and the whole thing, and I had a wireframe of my face and I thought it looked awesome.

AP: His struggle to build sunglasses turned into a near decade-long venture into mask making. But not just that. He invented various face-powered mechanisms. Some masks have an intricate counterweight system and open vertically like a telescopic mohawk. He also has a ring that springs up like a scorpion when you bend your finger.

US: Every step of the process is an experimentation, and also for every step of the process where the design changes, there's at least one piece where something didn't work. My hard line with the usage of AI is creative endeavors, but it can just be summarized down to: there needs to be a level of soul in a piece. If you use AI to make a piece in its entirety, the decisions made weren't made with a human mind involved. They were made with an algorithm that is guessing based on the decisions that humans have made previously.

AP: Ulysses said there are no shortcuts to his work. Since each mask is custom built, the only way to expand is to bring more people to him. He's actually remodeling a portion of his studio into an apartment for people to stay in while their masks are being made. 

US: You know, rash generalization time. A lot of the people who want to use AI to make art as a shortcut, who haven't put the work in, who haven't sketched shitty little eyeballs on the corners of their homework. They need to do that. They need to be bad at something for quite a while before they're going to be good at it, and it sucks. But that is what making is, and that is a very important thing about being human. Like if you are somebody who has gone from crypto to AI, just take a break, unplug, do some sketchy little drawings in a park. Enjoy your time. Do something that makes you happy for no other reason. Like, I started making these masks because I wanted to make some sunglasses, because I thought it would be cool and make me happy. At the time, did I think it would lead me to making stuff for Lady Gaga? Nope.

AP: Did you ever make the sunglasses? 

US: Oh yeah, I've made so many sunglasses now, and good ones too, let me think. Actually, I'll show you my safety glasses, because I made those from scratch. 

AP: These are not clunky shop class blinders. Ulysses places a fine set of steampunk glasses over his eyes and transforms into a Jules Verne character. 

US: So, these are my safety glasses, and these dark lenses are actually removable because these are shade five welding lenses. I use these for the casting because the torch I use is so bright and underneath the clear lenses are actually sapphire crystal watch faces. 

AP: Wow. 

US: Because I was sick and tired of my safety glasses getting scratched up. So, now the only thing that can scratch these is literal diamond. 

AP: Here's an idea AI would be hard-pressed to generate because it's never seen the world through scratched or dirty glass. 

HA: But I could still get AI to design glasses for my face and have them 3D printed. 

AP: If you value that novelty, I'm sure Google will hook you up. With all these possibilities and unknowns, I keep thinking back to the 1960 film version of H.G. Wells' The Time Machine. Have you seen it?

HA: No. 

AP: There's a scene where the inventor explains how his time machine works to a room full of dapper Victorian men. He opens this elaborate box, he brings out a little replica of his time machine and then he tells them, ‘Yeah, you push this lever forward and you pull it back to move backwards,’ and they're all like, ‘Brilliant.’ But I'm like, ‘Wait, wait, wait, I don't understand. I see the button but not the wires. I don't know if this is a good thing or a bad thing, or who's supposed to control it and how to tell if it's working.’ 

HA: Well, I'm sure everything will end up being fine. 

AP: We'll find out either way. For now, I do know this is the end of the episode.

HA: That it is, but thank you for an excellent episode, Amanda, and thank you to the listeners, whether you be human or robot or anything in between. If you liked this episode, please rate and review us on Spotify, Apple Podcasts and YouTube, and follow us on Twitter if you want. Next time on Sidebar, Los Angeles is facing a billion-dollar budget shortfall. One reason for that is lawsuits. LA pays out hundreds of millions of dollars in legal settlements and judgments every year. We're going to talk about a small but significant subsection of those lawsuits: broken sidewalks. Nearly half of all sidewalks in LA are crumbled, cracked or have these giant tree roots pushing them up. That's a big problem for a city that's too broke to fix them and too broke to pay tens of millions of dollars to people who accidentally trip over them.

(Outro music)