Statement From Worldcon Chair

Dear Worldcon Community,

We have received questions regarding Seattle’s use of AI tools in our vetting process for program participants. In the interest of transparency, we will explain the process of how we are using a Large Language Model (LLM). We understand that members of our community have very reasonable concerns and strong opinions about using LLMs. Please be assured that no data other than a proposed panelist’s name has been put into the LLM script that was used. Let’s repeat that point: no data other than a proposed panelist’s name has been put into the LLM script. The sole purpose of using the LLM was to streamline the online search process used for program participant vetting, and rather than being accepted uncritically, the outputs were carefully analyzed by multiple members of our team for accuracy.

We received more than 1,300 panelist applicants for Seattle Worldcon 2025. Building on the work of previous Worldcons, we chose to vet program participants before inviting them to be on our program. We communicated this intention to applicants in the instructions of our panelist interest form.

In order to enhance our process for vetting, volunteer staff also chose to test a process utilizing a script that used ChatGPT. The sole purpose of using this LLM was to automate and aggregate the usual online searches for participant vetting, which can take up to 10–30 minutes per applicant as you enter a person’s name, plus the search terms one by one. Using this script drastically shortened the search process by finding and aggregating sources to review.

Specifically, we created a query, including a requirement to provide sources, and entered no information about the applicant into the script except for their name. As generative AI can be unreliable, we built in an additional step for human review of all results with additional searches done by a human as necessary. An expert in LLMs who has been working in the field since the 1990s reviewed our process and found that privacy was protected and respected, but cautioned that, as we knew, the process might return false results.
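A minimal sketch of what such a name-only, sources-required query script might look like (the script itself has not been published; the model name, prompt wording, and use of the openai Python client here are all assumptions, not a description of the actual script):

    import sys
    from openai import OpenAI  # assumes the openai>=1.0 Python client

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def vet(name: str) -> str:
        """Query the model about one applicant, sending only their name
        and requiring sources so a human reviewer can verify every claim."""
        prompt = (
            f"Find public information relevant to vetting a proposed "
            f"convention program participant named {name!r}. Cite a source "
            f"(URL) for every claim; a human will verify each one."
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed; the statement does not name a model
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Per the statement, nothing but the applicant's name is entered.
        print(vet(sys.argv[1]))

Any output from a script of this shape would still require the human review step described above, since the model can fabricate both claims and sources.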

The results were then passed back to the Program division head and track leads. Track leads who were interested in participants provided additional review of the results. Absolutely no participants were denied a place on the program based solely on the LLM search. Once again, let us reiterate that no participants were denied a place on the program based solely on the LLM search.

Using this process saved literally hundreds of hours of volunteer staff time, and we believe it resulted in more accurate vetting after the step of checking any purported negative results. We have also not utilized an LLM in any other aspect of our program or convention. If you have any questions, please get in touch with program@seattlein2025.org or chair@seattlein2025.org.

Kathy Bond
(she/hers)
Chair Seattle Worldcon 2025
chair@seattlein2025.org

78 thoughts on “Statement From Worldcon Chair”

  1. I was not aware of the use of LLMs in the vetting process when I applied to be a panelist at WorldCon, and I am unsure if I would have made such an application had I known those tools would be used for this. However, it seems what’s done is done and I am not going to throw a fit.

    I would, however, like to see the output of the LLM’s vetting process for my application. What did the machine say about me, and will I have a chance to dispute that characterization if I find it to be untrue?

      • As would I. I’ve gone to some lengths to disavow AI in any kind of creative arena, and to disavow any association between AI and my work and myself – and now this threatens to simply undo all that and brand the AI sigil on my forehead. To say that I am not exactly happy about this situation would be a gigantic understatement. The very least that should happen is that every single person affected by this gets to SEE what AI THINKS it knows about them and what sort of word salad it regurgitated.

    • Nah, go ahead and throw a fit. These are the same LLMs that stole attending authors’ work, never mind the privacy concerns and false results!

    • Dollars to donuts they wrote a query that just returned yes/no. If it was yes, they let them in*. If it was no, they apparently did some searching.

      Otherwise, this wouldn’t have saved any time over a batch search engine script.

      *Except for track leads who cherry-picked people to review more.

    • Considering you were unaware of the use of an LLM in the application and vetting process (and it likely would have gained massive pushback had it been announced in advance), it seems like asking for the vetting results, and an opportunity to rebut, is more than fair. WorldCon should provide each applicant with the LLM output of their application.

      • We need the queries they ran more than the results. The results were likely just a list of Yes/No/[and possibly]Maybe.

  2. I’m utterly baffled by this choice. How was this more useful or reliable than simply searching via search engine? ChatGPT and similar LLMs absolutely should not be used for biographical information about people.

  3. > Using this process saved literally hundreds of hours of volunteer staff time, and we believe it resulted in more accurate vetting after the step of checking any purported negative results.

    I believe it saved time, but out of curiosity, what gives you reason to believe it’s more accurate? ChatGPT is infamous for being far less accurate than human work. You can tell us it saved time, and I’m willing to believe it was good enough, but it feels like an insult to our intelligence to say that actually the LLM was MORE accurate.

  4. This is a TERRIBLE idea and you should really have asked a few authors before implementing this plan. The output of LLMs is based on the work of creators, including your invited guests, which was stolen without permission, acknowledgement, or payment, and the amount of power and water used is horrific. The collation of multiple search results could have been handled with a simple script, without the use of planet-destroying plagiarism machines or the introduction of errors that required fact checking.

    I acknowledge and appreciate the use of fact checking and I will take you at your word that no one was rejected because of the use of LLMs. Nonetheless this is an extremely poor choice, with exceptionally bad optics, and will result in a LOT of bad press and hurt feelings, which could easily have been avoided.

  5. “…Let’s repeat that point: no data other than a proposed panelist’s name has been put into the LLM script…”

    That’s in addition to all the material from these and other authors which was misappropriated to build and train the LLMs in the first place, which apparently is OK with you all?

  6. Did your vetting process take into account the tendency of LLMs to reinforce and increase racial disparity, amongst other biases?
    Did you consider the massive environmental impact of ChatGPT?
    How does it save time when a system that in some studies has had an error rate of over 50% needs human validation anyway?

  7. A person with no professional-rate credits got a panel due to this system. Meanwhile, someone who has written four novels, edited ten anthologies, been nominated for awards, won awards, and been on the bestsellers list got denied; someone with a comic, short films, and two large releases announced so far this year got denied.

    This is not right.

    • As I mentioned elsewhere: they say they reviewed people the LLM marked as noes, but they didn’t review all the yesses.

      The LLM supposedly didn’t 100% keep people out, but it did 100% let people in.

      • And every person let in who didn’t earn it is another person who didn’t get a spot. As pointed out by someone else, at least one person got a panel who had no professional credits to their name whatsoever. That almost certainly meant a qualified person got denied a spot, and given how faulty LLMs are, I’d be willing to bet this happened a bunch of times.

        Just checking the chatbot’s “No”s is clearly not enough when a faulty Yes means someone else gets left out by default.

        • Yup.

          Also it likely let abusers in.

          Their process was horrible. Even if LLMs were reliable and accurate (hah!), Worldcon only input names to the LLM. That isn’t enough information.

          I have an extremely uncommon name (only 3 total people in the US), and an LLM would still not judge me correctly.

          I’m not a writer, but I got a degree in math. I published a single paper 30 years ago. I am not qualified for a panel at a major math conference (or even a crappy conference). But if I applied for one, and they followed this procedure, I’d undoubtedly be approved. One of the other two guys with my name happens to have written dozens of papers in math and physics. The LLM would see Kevin [extremely uncommon last name] and give me credit for their work.

          Meanwhile, there are thousands of people with my wife’s first and last names (and her middle name is the most common in the US). Who knows what an LLM would decide about her, but it certainly wouldn’t be based on her.

  8. “we built in an additional step for human review of all results with additional searches done by a human as necessary”

    “Absolutely no participants were denied a place on the program based solely on the LLM search. Once again, let us reiterate that no participants were denied a place on the program based solely on the LLM search.”

    ‘As necessary’ seems to mean that sometimes you believed the LLM without any double-checking. So you may not have denied anyone based solely on the LLM, but you likely approved people based solely on the LLM.

    Safety? Gone.

    Moreover, there were people denied ‘in part’ because of the LLM. That isn’t the exoneration of the process you claim it is.

    You may as well say ‘no participants were denied a place on the program based solely on their race.’

    “Track leads who were interested in participants provided additional review of the results.”

    How did they choose who they reviewed? This sounds like the leads chose to give extra process to people they knew.

  9. I want to point out that there is a lot of research about the bias of LLMs, and the need to mitigate those biases. In general, using an LLM for vetting people means that it is quite likely that you will disadvantage already disadvantaged folks, such as women, BIPOC and queer writers.

    This was such a bad mistake.

  10. Thank you for all the work you’ve done for Worldcon! It can be a thankless task, and I’m sure you understand the depth of feeling in the SFF community. I appreciate the many hours of volunteer work done by you and your team.

  11. If you can write a script to use ChatGPT, you can write a script to use a search engine. The difference in output between the two is enormous. A search engine returns data. LLMs synthesize data incorrectly, which is colloquially called hallucinating.

    Using an LLM irrevocably contaminates the process. Unless a human validates each and every output of the LLM, there’s nothing of worth in the LLM output.

    Please, for the sake of the reputation of the con and respect for your participants, abandon what you’ve done and start again.

    • I agree with all your points except one. A search engine isn’t going to give them a simple yes/no, where they can just take all* the yesses without doing any work. That appears to be what they did. Even worse than reviewing bad data.

      *Well, not all. Track team members could apparently pick people for in-depth review. That’s not a process ripe for abuse or anything.

  12. I am horrified at the absolute disrespect for writers this demonstrates and the unconscionable waste of resources. Disgusting.

  13. Did it tell you if any potential program participants put glue on their pizza to keep the cheese in place?

    Look, I’ve been an overworked Worldcon staffer and I get it. But someone really should have read the room on this process AND, if you were still intent on using it, should have disclosed it in advance. Especially for an event that is trying to win back the trust of the fans by being transparent, using an LLM, which is anything but transparent, is not a super great look.

  14. I’m confused why you characterize the LLM as “more accurate”, because LLMs have no capacity to understand truth or falsity: they uncritically accept all information fed into them as factual, regardless of whether it is true, and are infamous for “hallucinating” false but believable-sounding material. They’re useless when it comes to matters of factuality, and the fact that you have failed to grasp this suggests staggering levels of incompetence within your organization.

  15. This is a very concerning statement. Did you abide by GDPR and other data protection legislation when you fed people’s personal data into this LLM?

    • Of course not. Their statement makes it clear they don’t even understand that a full name is PII that’s subject to all sorts of data privacy concerns.

      Additionally, their statement makes it seem like they don’t understand names aren’t a unique identifier. Imagine putting “John Smith” into an LLM (with no other data, according to the statement) and expecting it to return information about a particular person.

  16. When you realize the depth of your mistake in vetting panelists in this way and choose to scrap the current results and restart the process, I am happy to volunteer to help with vetting, as would many others, I’m sure.

    • I agree wholeheartedly with this! And I am sure others will volunteer also.
      The use of an LLM for this is not appropriate. TBH, the use of LLMs for any part of WorldCon is inappropriate, given that these programs were trained on writers’ work without permission or even notification.

  17. Given that LLM-generated information about people is rarely accurate, this seems like a foolish waste of time and resources even if you don’t take into account the ethical issue that the program you’re using was developed using the work of many of your paying attendees without either compensation or permission. I have a very common Anglo name, even when I include my middle name, and I would not be at all surprised to find that any LLM-generated information about me included facts about other people with the same name along with made-up information.

    As someone who does a great deal of reading about these programs and about where tech in general is going, I was hoping to be on a panel discussing “AI” at the convention. I recently was part of a panel discussing this at Minicon and I moderated such a panel at World Fantasy in 2023. In both cases, we had thoughtful discussions that included participation by those with a great deal of tech experience. I don’t think anyone on either of those panels would have wanted to use “AI” to vet panelists.

    I expected better from you.

    Nancy Jane Moore

  18. You say that only the participant’s name was fed into the LLM. But where did the other data the LLM was using come from? How did you assess the validity and reliability of the data that was used to train the LLM?

  19. Wow.

    Just: wow.

    Did you really, seriously think your handwaving and spinning and excuse-making would hold any water with anyone who’s thought about these matters for more than about three seconds?

    Clearly, you have no scruples, no situational or contextual awareness, and deep scorn for (real) intelligence.

    You ought to be deeply ashamed, but clearly you have none of that, either.

  20. As a writer who is aspiring to publish one day, I am disappointed to say the least. You’ve lost some substantial legitimacy in my eyes, and I know that means nothing coming from a nobody such as myself, but this will be remembered for the future by more than just me.

    Now if you’ll excuse me, I’m off to research ways to protect my work from being scraped by LLMs.

  21. This was a poor choice. The tool is ill suited to the task because of recurring bias issues.

    It was also a poor choice because these tools are built without writers’ consent, even going so far as to ignore robots.txt files indicating a site should not be scraped.

    Failure to be up front about this was an equally poor choice.

  22. * Using ChatGPT assumes that all prospective program participants have enough of an online presence for it to provide any meaningful trace of them to pass on (unless online presence was one of the requirements to be offered an invite). It’s also particularly tone-deaf given the recent revelations of the rampant unauthorized use of the work of genre creators by AI tools.

    * Given that you yourself state that you had to build in an additional step for human review of all results with additional searches done by a human as necessary, this would not appear to be the time-saver you claim.

    * I thought the whole idea of having Program track leads was that they were familiar with particular aspects of the genre, so I’m not sure why you could not (or did not) use them to divide up the work of vetting program participants.

    • They mention no one was –denied– solely by the tool, but they don’t say no one was –approved– solely by the tool.

      That suggests that their query returned something like Yes/No/Maybe.

      They reviewed all Maybes and Noes, but they don’t say they reviewed Yesses. That’s where they could save time. (By approving people without appropriate review).

      I suspect the only yesses that were reviewed were when track leads picked individuals for more screening. No process is given for that.

  23. Utterly unserious, shameful, and cheap. And alarming that nobody in a leadership position thought through just how bad and distasteful it is to do this, on so many levels; that nobody saw the countless artists and authors and editors around the world fighting tooth and nail against the adoption and legitimisation of these tools and thought, we should not use these tools, we should find another way, we should stand in solidarity.

  24. Just a question: it seems you only reviewed denials. Is a false positive in this case not a much, much bigger deal than a false negative?

  25. So you used the bs hallucination machine to save time and in return you get significant reputational damage and permanently lost trust.

    And now you will also probably need to spend a large amount of time fighting fires and trying to salvage what you can…
    And processing information and GDPR requests from people…
    And handling people who are now dropping out, etc. etc. etc….

    Was it worth it?

  26. Wow.

    Just…wow.

    As a con whose theme is looking to the future, perhaps you should have done that before using an LLM??

    ChatGPT’s impact on Earth is not insignificant. Not to mention it was built on the labor and works of almost all (if not all) of your panelists, including me. It’s also incredibly racist, ableist, etc. in its process as well. You think you saved time but the amount of time you’re gonna take making up for such a colossal mistake means you didn’t save jack.

    Instead, you just ticked off most of your guests/attendees and pros. Great job there.

  27. Also, I’m a panelist and this was NOT disclosed to me. You did not have my permission to feed anything of mine, including my name, into ChatGPT or any other LLM. Judging by these comments, no other panelists saw any notification about it either…

  28. Read the fucking room! Nobody cares HOW you used the world destroying plagiarism slop machine, we care THAT YOU THOUGHT IT WAS OKAY TO USE IT AT ALL

  29. > “Once again, let us reiterate that no participants were denied a place on the program based solely on the LLM search. …we believe it resulted in more accurate vetting after the step of checking any purported negative results.”

    So — were people *approved* based only on the LLM search? Because that seems just as bad.

  30. So instead of running the LLM process against a group of test cases, or in parallel to the process it’s supposed to replace, you just… replaced the usual searches with this new LLM based approach? ‘Human review’ notwithstanding, that doesn’t sound like a test to me.

    The expert you consulted indicated that the LLM might provide false results, but what steps were actually taken to counteract this? How was human review of the results conducted? Were *all* LLM results human reviewed or only some? If only some, how was it determined which results would be reviewed by humans and which would not?

    Has there been, or is there any intention to do, any kind of after-action analysis or audit to assess how the LLM performed and whether it fairly screened prospective panelists or was unfairly biased in any way?
    Will the results of this analysis be released publicly at any point?

  31. Other people have cogently addressed the ethical and environmental issues with using an LLM this way. In addition to those problems, it’s simply a bad process.

    Program applicants should be evaluated by human beings with a variety of backgrounds and experiences who can consider things like “Has this person been an engaging panelist at other conventions I’ve been to” and “Do we already have people on the program who are experts in this area” and “Would this person improve the diversity of our participant cohort.” People know what people want to see at conventions, and what makes someone a compelling speaker and a well-behaved panel participant. Algorithms don’t.

    Also, I’m astounded by the claim that it takes 10 to 30 minutes to do the most basic vetting of an applicant. What on earth are you doing that takes that long? As long as the members of your program team are in fandom and reasonably knowledgeable, easily several hundred of those applicants will be recognized by name. For the rest, verifying someone’s ISFDB entry or professional website takes seconds, and checking SF/F news and gossip sites to make sure they’re not a notorious bad actor takes maybe another minute. You claim that using an LLM saved hundreds of hours, but then describe a process of evaluating the output that sounds incredibly time-consuming. What’s the point of asking an LLM anything when none of its results can be trusted and it all has to be vetted by people? Why not just… ask people?

    In a recent interview on Ars Technica, Adam Becker said, “A large language model is never going to do a job that a human does as well as they could do it, but that doesn’t mean that they’re never going to replace humans, because, of course, decisions about whether or not to replace a human with a machine aren’t based on the actual performance of the human or the machine. They’re based on what the people making those decisions believe to be true about those humans and those machines. So they are already taking people’s jobs, not because they can do them as well as the people can, but because the executive class is in the grip of a mass delusion about them.” I’m deeply disheartened to learn that you think so poorly of your volunteers—or are doing such a poor job of selecting volunteers—that this cumbersome, unethical process seemed like a better bet.

    • Well said. In addition to the absolutely terrible optics and the ethical and environmental issues inextricably baked into this — all of which should have been dealbreakers here — with the process and oversight described, it really doesn’t seem like it saved time (unless the human process going on was wildly inefficient — 10 to 30 minutes for preliminary vetting, really??). I also have trouble seeing how it could possibly have improved accuracy, rather than simply providing an additional vector for biases and inaccuracies, and ones that are notoriously difficult to catch, at that. And this, after all the infamous bias and transparency issues with the previous Worldcon — just, really??

      I understand how, with busy volunteers and a large pool of potential panelists, this could have seemed like an appealing way to save time. But I fail to understand how it could have gone forward to be actually used. What a deeply disappointing and unnecessary choice.

  32. Let’s see if I understand this correctly, shall we?

    After a Worldcon in China where people were left off the awards ballots for what seemed like pretty arbitrary reasons… the Worldcon in my town, which I eagerly signed up for, is using ChatGPT to vet panelists… yes?

    1) you understand that there are lawsuits regarding the theft of intellectual property in the creation of ChatGPT, yes? That’s rather more than “concerns and strong opinions”.

    2) you understand that there is a real environmental cost to using these tools, yes? I seem to recall the encouragement of public transit, yes? Well, perhaps you’ve offset that savings with this.

    3) the process you describe seems … rather complicated. Yes, I’m sure you were concerned about the potential panelists, but it almost seems as if the process was so complicated that it offset any savings you were hoping for. Was it really saving you time at that point? And your own expert doesn’t sound optimistic.

    What’s especially disheartening is that due to circumstances beyond our control involving US politics, this Worldcon is already likely to have low international attendance.
    And this is unlikely to help.

  33. OK, you used AI. You did not explain or clarify what criteria were used to determine who gets to be a program participant or not. Was it based on panel topic? Was it based on past accomplishments? Was it used like keyword filtering?

  34. Well, if you’re willing to throw your convention to piratical hallucinating software that’s becoming more inaccurate by the day then expect a storm straight into the sails of your ship. If you want to go down in infamy, folks, you’ve done the trick.

    Fortunately for me I don’t do big cons and events (despite returning to performing live) so I don’t need to be loud in rejecting the glorious disaster you’re fomenting.

    You *could* have requested more volunteer help — honestly, there’s lots like me who love to dig into information. Or you could have hired a traditional vetting firm and possibly paid less than you did for the hallucination engines.

    Hell, hire Steve Perry. He’s a trained investigator.

    You’re also wasting time and money on this while a much bigger issue is storming your way — the problems the Mango Mussolini administration is causing for visitors to the United States of Maggots. Either you need to come up with procedures to help people come in unscathed and survive their stays without being kidnapped and shoved in an ICE gulag or you need to create policy to inform WorldCon members overseas (and those here on visas and green cards) that they should stay far away from the WorldCon.

  35. LLM web searches steal copyrighted content from online creators such as myself. Websites depend on people to visit them to generate revenue. These searches meant your volunteer team was benefiting from the knowledge of online writers without helping them stay afloat. It is terrifying that you thought this was okay given that your entire purpose is serving writers.

    You claim that this saved you hundreds of hours, but also that all of the LLM output was checked by humans before decisions were made. These two things cannot both be true. At some point you must have used this very likely inaccurate search data as part of the decision making process without actually verifying everything it said. Your description of your process is very vague, which raises many questions. At this point, a more transparent description of your vetting process, step by step, is called for.

    I don’t want to ask you for a refund if I don’t have to, but I cannot support an organization that thinks it’s okay to steal the work of myself and others for their convenience.

  36. yeah. no. that’s not an apology. all llms have to train on data *before* you enter anything into it. it’s the biggest aspect of the program that makes it an llm. there is no such thing as an llm with just names in it. that’s a database. when you purchased the llm, you supported the theft of copyrighted works from authors. the. end.

    no world con for me this year.

  37. Aside from all the concerns others have raised, which are real and serious, I read this: “The sole purpose of using this LLM was to automate and aggregate the usual online searches for participant vetting, which can take up to 10–30 minutes per applicant as you enter a person’s name, plus the search terms one by one.” and said “but that’s what you use python for”.

    Because THAT IS WHAT YOU USE PYTHON FOR. Exactly this kind of “I don’t want to have to do a bunch of tedious work, I just need to see the results, and also, I need the results to be real and not made up by hallucinating sand.”

    There are a lot of good computer tools for making these kinds of web searches easy and fast and handing a human the results in a readable format. This isn’t something you need a python genius for, either — if there isn’t a 90% working version of code that does almost exactly this on stack overflow, I’d be very surprised.
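    For example, a minimal sketch of that kind of aggregator, assuming Google’s Custom Search JSON API (the API key, engine ID, and search terms below are placeholders, not anyone’s actual vetting terms):

        import requests

        API_KEY = "YOUR_API_KEY"      # placeholder Google API key
        ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder Custom Search engine ID
        TERMS = ["convention", "panel", "code of conduct"]  # example terms only

        def search(query: str) -> list[dict]:
            """Run one query and return real title/link/snippet results."""
            resp = requests.get(
                "https://www.googleapis.com/customsearch/v1",
                params={"key": API_KEY, "cx": ENGINE_ID, "q": query},
                timeout=30,
            )
            resp.raise_for_status()
            return [
                {"title": r["title"], "link": r["link"], "snippet": r.get("snippet", "")}
                for r in resp.json().get("items", [])
            ]

        def vet(name: str) -> list[dict]:
            """Aggregate the usual name-plus-term searches in one pass.
            Nothing is synthesized, so every result is a checkable link."""
            results = []
            for term in TERMS:
                results.extend(search(f'"{name}" {term}'))
            return results

    Every hit that comes back is a real page a human can open; nothing in that pipeline can make results up.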

  38. This makes me feel a lot worse about being accepted as a panelist, honestly. How do I know if that was right? Am I taking a spot from someone more qualified?

  39. And what about the hallucinated positive results? What about the results that should have included negative information but simply didn’t? How many panelists were not accepted because one that shouldn’t have been was? It’s clear not every result was given the same level of scrutiny. If that’s not unfair I don’t know what is.

  40. Hey this seems to be going great! What an excellent Worldcon Chair you are! Everybody loves ChatGPT, right? This is the exact best move if your goal is to permanently destroy Worldcon.

  41. Lazy, lazier, laziest.

    Worldcon organizing has always been labor-intensive. Gail Barton (my wife) and I ran the Art Show at Denvention II in 1981. We had some 300 artists to manage, with no computer, just a typewriter and the usual fannish repro equipment of the day. We started with a set of artist mailing lists gotten from several earlier Worldcons, plus an invaluable list given to us by Bjo Trimble (an old friend of Gail’s).

    Gail and I corresponded with at least two thirds of the artists who came to the show; over a hundred of them were walk-ins who did not pre-register for our show. We encouraged walk-ins via the convention publications; Gail’s career as a fan artist began as a walk-in artist at St. Louis Con in 1969. She did all the management and policy setting, while I did all the recordkeeping and correspondence. After the con, we sat down with the convention treasurer to finish things up—I needed the treasurer to check my figures (he was a pro accountant) and to write the checks to the artists.

    If my memory serves me well, the only computer that Denvention II had was an Apple II belonging to someone on the concom; it was used for registration recordkeeping. The programming organizers got along just fine without one.

    This was a 4000-person convention, meaning it was in the same general size range as most modern Worldcons. Except for registration records, we did everything manually.

    Since that con, I developed a good IT career, retiring as a database administrator. I have little use for AI; I consider it to be a crutch made of rotten wood with a thin coating of varnish to conceal its weakness. It will collapse at the worst possible moment, letting you fall splat onto the floor.

    Note: I also posted this comment on File 770.

  42. LLMs are not search engines, you stupid, stupid people.

    They hallucinate nonsense constantly and have been shown to have inherent biases (like racism, homophobia, etc.).

    Congratulations, you have publicly embarrassed yourselves.

  43. As an author whose work has been stolen to make LLMs like the one used, this is not a remotely sensible statement. There are numerous issues with using ChatGPT specifically, and certain other LLMs in general, for Worldcon, which is meant to be a celebration of SFF creativity. Not least among those issues are the theft of our intellectual property and the devastation wrought on our planet and society by its use. These tools are also well-known for generating biased returns that are racist, misogynist, ableist and anti-LGBTQIA. Privacy is on the list but relatively low down because of the enormity of the issues already mentioned.

    I withdrew from consideration for the in-person programme a few weeks ago because I decided not to risk going to the USA. My considerable sympathies with the team organising this Worldcon are evaporating because of this statement about using ChatGPT to vet programme participants. I have gone from feeling sad about not attending and taking part to genuine relief.

    Unless things dramatically change*, I would be urging no one to take part on the programme because to do so would be an implicit endorsement of intellectual property theft, environmental destruction and every other dreadful thing associated with the likes of OpenAI.

    *Suggestions:
    Remove all the results based on the use of LLMs/ChatGPT from the programming selection process and redo what is needed. Yes, it will be a lot of work but consider a Worldcon with no programme at all as the alternative, or a programme dominated by unethical tech enthusiasts.
    Estimate the environmental cost of the usage and donate a greater figure to a reputable charity to mitigate the destruction. Too many people think that using ChatGPT and similar tools is cost free, when it is not. Note that I have not included the costs associated with intellectual property theft, but perhaps a sizeable donation could be made to the various class actions being taken against unethical tech developers like OpenAI, Meta, etc.

  44. “We understand that members of our community have very reasonable concerns and strong opinions about using LLMs.”

    Maybe if people have what you admit are “very reasonable concerns” you shouldn’t do/have done it.

  45. Cue inevitable boycott of programming participation by almost every creative likely to draw…attendance at the convention.

    This statement is just ridiculous. It’s tone-deaf and myopic, written by someone (I’m guessing a group of someones) attempting to be politic and instead demonstrating only the most reprehensible sort of politics.

    Reverse this decision. Start again. Maybe you can save this Worldcon. Probably not, but maybe.

  46. Out of curiosity, which AI-connected company is sponsoring WorldCon this year? Because this is giving big NaNoWriMo vibes.

    Incredibly disappointing to see an organization pretend to care about authors and then support the plagiarism of their work. This needs to be walked back immediately.

  47. Regarding a person’s full name being private data, applicants are applying for the opportunity to have their name and photo and biography published publicly. Any manual or scripted research with search engines would utilize the same name. So the privacy aspect of this debate is moot.

  48. I second SJ Groenewegen’s suggestions above. Seriously. If you were using LLMs from the start, throw the whole panel process out & start over. Call for human volunteers (including someone to write a program accessing an actual search engine API) so that it can be done in a week.

    The results are *poisoned*. Free yourself from the sunk cost fallacy, do it NOW.

