First and foremost, as chair of the Seattle Worldcon, I sincerely apologize for the use of ChatGPT in our program vetting process. Additionally, I regret releasing a statement that did not address the concerns of our community. My initial statement on the use of AI tools in program vetting was incomplete, flawed, and missed the most crucial points. I acknowledge my mistake and am truly sorry for the harm it caused.
There is much more that needs to be done to address this harm, and it will take some time over the weekend to develop a comprehensive response and fuller apology. We will release a response by Tuesday of next week that provides a transparent explanation of the process that was used, answers more of the questions and concerns we have received, and openly outlines our next steps.
Thanks,
Kathy Bond
I hope you’ll address how, because GenAI is trained on racist and misogynistic datasets, bias is likely to appear in selections, even those composed only of names. What are you and your org doing to avoid that? (This is only about the selection, setting aside the theft, water usage, and environmental damage caused by GenAI.)
LOL. This is untrue. Go look at Anna’s Archive and you can see the 50m+ books they’re typically trained on. Are they all racist and misogynist? I suspect you think so, but that’s silly.
Step 1. RESIGN.
Step 2. Anyone who thought this was a good idea, ALSO RESIGN.
Step 3. Now Worldcon can continue.
Relax, Francis.
So everybody who received a panelist rejection letter will now receive an acceptance letter?
Thank you for taking these concerns to heart. I am withholding judgement until I have a better understanding of how and why the AI system was used.
Very classy. I don’t think you needed to apologize. No need to wallow in the past. With human review, your process was just fine. Thank you for caring about what folks think, even when they are wrong and you are right.
The only response of value you could provide involves a promise never to use AI tools in this process again, followed by a step-by-step plan for how you will roll back the damage already done by AI, including full details of what information was fed into this tool.
As ChatGPT uses all input to “train” its own tools, the least your community deserves is to learn what information and property of theirs has been freely given to a plagiarism machine.
If you need to start the vetting process from scratch, let me know. I volunteer to help if you need me.
Nice to see someone trying to be helpful instead of throwing stones.
Same here, I doubt I can help but happy to if I can. Hope the staff is getting support through this. The internet can be awful.
Several suggestions:
1. Don’t resign.
2. Don’t waste any extra effort on framing another apology. A simple “I’m sorry, we’re all sorry, and we won’t do it again. Ever.” will do.
3. Instead of making any more lame excuses, throw out all the results from ChatGPT.
4. Start over from scratch using 100% manual methods, which you should have done in the first place.
5. Vetting should be 100% fact-based. For authors and artists, searches of ISFDB should suffice. For science program participants, a curriculum vitae should do. If it isn’t fact-based, it’s subjective. Betting, rather than vetting.
6. Worries about potential CoC violators can be covered by requiring all participants to agree to your Code of Conduct.
Past conventions have gotten along just fine using manual methods. This includes Worldcons.
Thanks for listening to the community and keeping us apprised of how this shakes out.
I’m in the “Why grovel? You get it, just move forward and fix it” camp, fwiw. Everyone makes mistakes, sometimes really smelly ones, and I couldn’t handle what you’re doing, so I’m absolutely grateful you’re taking on this event, even if I would have made different choices. Please don’t resign.
I look forward to seeing WorldCon 2025’s selection process develop in a way that fits more in line with the utopian SF future we’re all hoping for.
Three cheers to you. OK, maybe not three cheers, but you need something amid all the hate. You’re a volunteer trying to get a challenging task done, and you decided to see if modern tools could help. Was it the right choice? Who knows? Definitely not many of the commenters on the prior thread and other threads.

They still think ChatGPT was trained on pirated works. (It was, but they stopped; the current model you used was not, and whether training on public data from the web is infringement or fair use remains very much undecided, though people act like it’s a settled question.) They ignore the fact that you were fully aware of the strange ways these tools hallucinate, and that you developed a process to mitigate that. They forget that you and your team are volunteers trying to do a job, that the other ways it’s been done are also fraught with error, and that if they don’t agree with the approach you took, they should just post a nice note saying they recommend against it; they are not the ones volunteering.

It’s not like you had humans evaluate nominees for whether they are critical of the CCP, but people are treating it like that. (That was worthy of the blowback, for it was malicious. What you did was in good faith even if people disagree.)
So just because people online love to get upset and spew hate, don’t imagine the whole world is that way. Or even all of fandom.
There are several things to challenge there, but let’s start with the claim that the company behind ChatGPT no longer uses stolen works. That had to be *forced* on them, and as of this year they are pushing for Trump to weaken copyright law so that they can continue feeding copyrighted data into the program (or whatever the proper term is) and go back to not paying for things. Something they are legally allowed to use because of legal finagling doesn’t make it right; it’s just legalized stealing from those who no longer have protection.
With that sort of expansionist greed behind it, and the general attitude the program fosters, it’s very understandable that most people are rather hostile toward it, particularly without warning that it would be used. From all the comments on the previous announcement, it was quite clear that there was no notification that ChatGPT was going to be used as a tool, nor any update to those who had already applied when the decision was made. That lack of transparency only fed the shock and anger the announcement itself sparked.
As for the expert who was supposed to mitigate the tool’s hallucinations, it’s hard to be happy about any expert in a tool and a company as problematic as OpenAI and ChatGPT have become. Someone who knows all those things is, more often than not, also a pusher for how useful and reliable the program is. I’d want more information before saying that was necessarily a good thing.
Last of all, people have been saying, ‘oh, but they’re just volunteers.’ To an extent, that works, but they are volunteers for a convention that has very reliably attracted the same kind of crowd, with the same kind of attendance, for a very long time now. Not only does that mean they would know their audience (and how much AI and ChatGPT are loathed in many circles within it), but they would have had access to the names (and likely contact information) of the staff and con-runners before them to ask for advice. They knew what they were getting into, and they had far more resources than most of us looking in from the outside. While yes, they are volunteering out of a love for fandom, ‘just volunteers’ are still expected to perform and act to a certain standard.
Please also note that using personal data of attendees without consent in this way was potentially in breach of EU GDPR legislation, and take appropriate precautions to avoid this causing liability for Worldcon.
Thank you for recognizing the error and making the apology.