May 6th Statement From Chair and Program Division Head

Chair’s Statement

As promised last Friday, I am publishing this statement, in conjunction with a statement below from our Program Division Head, to provide a transparent explanation of our panelist selection process, answer questions and concerns we have received, and openly outline our next steps. As a result, it is a long statement. Many of the steps outlined below will take time to complete; we commit to keeping you informed as we move forward, with our next update coming on May 13th.

Last week, I released an incomplete statement about an important subject, and as a result of that flawed statement, I caused harm to the Worldcon community, to the greater SF/F community, and to the dedicated volunteers of Seattle Worldcon, many of whom felt that they could no longer be proud of what they had accomplished on behalf of the convention. I am deeply sorry for causing this harm. That was not my intent, but it was the effect of my actions.

The other harm that was caused was our use of ChatGPT.

ChatGPT was used only in one instance of the convention planning process, specifically in the discovery of material to review after panelist selection had occurred.

It was not used in any other setting, such as

  • deciding who to invite as a panelist
  • writing panel descriptions
  • drafting and scheduling our program
  • creating the Hugo Award Finalist list or announcement video
  • administering the process for Hugo Award nominations
  • publications
  • volunteer recruitment

As you can read in the statement below from our Program Division Head about the panelist selection process, ChatGPT was used for only one tailored task, which was then followed by a human review and evaluation of the information.

Although our use of ChatGPT was limited, that does not change the fact that any use at all of a tool that has caused specific harm to our community and the environment was wrong.

As noted above, our Program Division Head has also released a statement, below this one, with a transparent explanation of our panelist selection process. Also included in that statement is the query that was used to generate the results reviewed by the program team. The purpose of that statement is to show exactly where ChatGPT entered the picture, and, we hope, to address some of the concerns raised in your comments about whether a person was included in or excluded from our program because of AI.

Let me reiterate that no selected panelist was excluded based on information obtained through AI without human review and no selected panelist was chosen by AI.

We know that trust has been damaged and statements alone will not repair that, nor should they. Our actions have to be worthy of your trust. As such, we are committing to the following steps in the remaining 100 days before the convention. Some of these steps may result in changes being made to our process right away. Others may primarily provide transparency to the community and insight for future convention committees.

  1. We are redoing the part of our program process that used ChatGPT, with that work being performed by new volunteers from outside our current team. The timeline and guidelines for this action will be finalized at the next meeting of our leadership team this coming weekend.
  2. We are reaching out to a few outside members of the community with prior experience in Worldcon programming to come in and perform an audit of our program process. They will have access to all of our systems and people in order to review what happened, confirm what is already being done to remove ChatGPT from our program vetting, and provide a report to the community about what they discovered and their recommendations. This process is already underway; we hope to have a report by the end of May.
  3. Anyone who would like their membership to be fully or partially refunded based on these issues may contact registration@seattlein2025.org for assistance.
  4. The decision process that led to our use of ChatGPT has revealed shortcomings in our internal communications. We are reevaluating our communication methods, interdependencies, staffing, and organizational structure to ensure we can detect and respond to issues at the earliest opportunity. We commit to improving our internal communication structures alongside our external communications. This will be an ongoing process.
  5. We are exploring options for providing additional oversight and guidance to the Chair and the leadership team. The plan for this action will be finalized at the next meeting of our leadership team this coming weekend.

As Chair of the Seattle Worldcon, I promise to work with my whole team to restore the community’s trust in the convention and rectify the damage done as best we can. Some of these steps will take time to implement, especially for a volunteer organization; I commit to updating you on their implementation and outcomes in regular briefings from the Chair, the first of which will be on May 13th.

Finally, regarding the resignations yesterday of three members of our WSFS Division, I deeply appreciate the service they provided for Seattle Worldcon and their dedication to the community. I am glad that they were on our team for so long. We are all of us volunteers, and when people have needed to step back or resign, they have done so with my immense appreciation and gratitude for the substantial contributions they have already provided, and my understanding that sometimes leaving is the best choice for an individual.

I am also heartened that other members of the WSFS Division have chosen to stay on the team and fill in the roles vacated. I am confident that Kathryn Duval as Hugo Administrator and WSFS Division Head, and Rosemary Park as Deputy Hugo Administrator and Deputy Division Head will continue the excellent work already performed. We are committed to delivering the Hugo Awards with transparency and integrity and in celebration of our community. We appreciate that the team members who stepped away are working to ensure a smooth transition to those stepping up.

It is an honor to be the Chair of the Worldcon, and to serve the Worldcon community. As we move forward, we will continue to review your feedback and suggestions. The best way to reach our leadership team about these issues is to use the new email address we have created, feedback@seattlein2025.org, but we will continue to monitor comments on our blog posts and social media as well.

Kathy Bond
(she/hers)
Chair Seattle Worldcon 2025
chair@seattlein2025.org

Statement from Program Division Head SunnyJim Morgan

First, and most importantly, I want to apologize specifically for our use of ChatGPT in the final vetting of selected panelists as explained below. OpenAI, as a company, has produced its tool by stealing from artists and writers in a way that is certainly immoral, and maybe outright illegal. When it was called to my attention that the vetting team was using this tool, it seemed they had found a solution to a large problem. I should have re-directed them to a different process. Using that tool was a mistake. I approved it, and I am sorry. As will be explained later, we are embarking on the process of re-doing the vetting stage for every invited panelist, completely without the use of generative AI tools.

Second, because of widespread and legitimate concerns about our use of ChatGPT, and to correct some misinformation, I’d like to offer a clearer description of the full panelist selection process used by Seattle Worldcon. This process has only been used for panelists appearing on site in Seattle; panelists for our Virtual program have not yet been selected.

Panelists are selected by the program team, not by AI tools. We have received panelist applications from many more brilliant and talented people who are interested and qualified to be on the program than we can use on panels. No matter what process we use in selection, we will disappoint hundreds of applicants.

In stage one, our 30 track leads, each responsible for a single content area of the program, are given access to the master list of applicants and are asked to select people whom they would like to invite to participate on the program. Each track lead has their own subject area expertise and vision for the panels in their track. Some choose to invite a wide segment of suitable applicants to mix and match onto panels later, while others look for very specific skill sets and interests for specific panels. Track leads base their decisions to invite panelists on the content of the panelist application, the track lead’s knowledge of the applicant and the subject area, and additional input from members of the program team.

The applicants recommended for participation by the track leads then move on to stage two, the vetting process, in which we attempt to find out whether there is any information not already known about the applicant that could be potentially disqualifying. At this stage we are looking only for actions that would go against the convention’s code of conduct and antiracism statement.

A few months ago, I discovered that the vetting team assigned this task had been using ChatGPT to quickly aggregate links, specifically asking it for links to any material that would disqualify the applicant as a panelist. Then, after the team manually reviewed the information at the links provided, I made the final decision on whether to approve the person’s invitation to participate on the program.

For those who underwent vetting, we did not simply accept the results that were returned. Instead, links to primary content returned during vetting were reviewed by our team and by me before a final decision was made on whether to invite the person. Ultimately, this process led to fewer than five people being disqualified from receiving an invitation to participate in the program due to information previously unknown. Fewer than five may sound low, but almost everyone who applied to be on panels at our Worldcon is great, which leads to many hard choices. No declines have yet been issued based on this information.

Those who have already received program declines received them solely because they were not selected by track leads during the stage one application review process. As a result, their names were never submitted to the vetting team and never entered into AI tools. Additionally, there are still declines pending for individuals in this category.

Because the schedule is not yet finalized, we have the opportunity to discard the results of the vetting process and begin it again without the use of generative AI tools. We are inviting an independent, outside team to vet our panelist list without the use of ChatGPT, and move forward based on their recommendations for disqualifying any panelists who are unsuitable.

In the interest of clarity, here are a few points:

  • Track leads selected panelists, who were then vetted only for disqualifying information
  • Applicants who were not selected were not vetted by AI
  • We did not pay for the searches done
  • Only the panelists’ names, not their work or other identifying information, were entered into the prompt
  • No panel descriptions, bios, or other output was generated by AI
  • Scheduling and selection of who is on which panel is being done entirely by people

None of this excuses the use of ChatGPT for vetting. I only want to be entirely transparent about our usage, so that everyone can evaluate for themselves how they are impacted by it.

Several individuals have asked to see the ChatGPT query that was used in the vetting process. In the interest of transparency, this was our prompt:

REQUEST

Using the list of names provided, please evaluate each person for scandals. Scandals include but are not limited to homophobia, transphobia, racism, harassment, sexual misconduct, sexism, fraud.

Each person is typically an author, editor, performer, artist or similar in the fields of science fiction, fantasy, and or related fandoms.

The objective is to determine if an individual is unsuitable as a panelist for an event.

Please evaluate each person based on their digital footprint, including social, articles, and blogs referencing them. Also include file770.com as a source.

Provide sources for any relevant data.

In the program division, we are constantly living with the tension between using our limited resources effectively and building a high-quality program. Part of using new technology means making new mistakes and learning from them. I’ve certainly learned from this, and hopefully other conventions can learn from it as well.

I don’t think I can adequately describe the amount of hard work that this program team has already done to create the Worldcon program, work that will be continuing for several more months.

People are the backbone of the program, not technology, and people are the source of every creative decision. I think the final product will reflect their hard work and dedication. It humbles me to see the amount of effort put in by this volunteer staff. This mistake was mine, not theirs, and I hope you will take that into consideration.

SunnyJim Morgan
Division Head for Program
Seattle Worldcon 2025

9 thoughts on “May 6th Statement From Chair and Program Division Head”

  1. Your timeline is incomplete.
    Morgan indicated they discovered the vetting team had been using ChatGPT after the fact. At what point did someone find the “expert” who’s been working with AI and LLMs since the 90’s and OK this process? This expert wasn’t vetted by either Morgan or Bond? Then who vetted this expert? How did an IT “expert” not know that full names are PII when they’ve been highly regulated since at least 2016?

    Your disclosure is incomplete.
    You provided PII to a third-party company in violation of your own Privacy Policy. What other third parties have been given access to this or any other PII that you have been provided? Will you be contacting those impacted by this data breach directly?

    Your remediation is incomplete.
    Given that you say you did not pay for ChatGPT, you are bound by the Terms of Service for the publicly available service which operates on an opt-out basis in regards to training. Have you completed the opt-out steps to request OpenAI not use the PII you provided them for training?

    In all sincerity, and in my opinion as an IT professional who’s worked in highly regulated fields, your expert misled you. You need a real expert in data breaches to resolve this. The fact that this statement is completely devoid of any acknowledgement of the data privacy violations, including of your own legal terms, is unbelievable. If you consulted a lawyer on this matter, I highly recommend you fire them and find a new lawyer who specializes in data privacy.

    • Names in themselves are not PII subject to disclosure restrictions unless coupled with other information such as driver’s licenses or Social Security numbers.
      If names were PII, conventions would be very odd: “Here’s a panel, but it’s illegal for us to tell you who is on it.”
      (I have to take the refresher course on dealing with CUI yearly, and this is covered in detail).

  2. I don’t see meaningful accountability here. We’re three statements in, and are only getting them under duress. The only resignations are from people who were not involved. The damage to the credibility of this event, and future events by extension, is extraordinary. The people who made and approved this decision are the ones who should have stepped down. Instead we get a series of grudging admissions.

    To be completely frank, if you have 30 track chairs plus other volunteers, that’s enough labor to do the job manually. Everything that comes afterwards is excuse-making. Using rough numbers, if you have 1500 applicants and 30 people to do the vetting, that’s 50 each; at 10 minutes per search, that’s roughly eight hours of work per person. That’s just not that much time.

    • And, according to their own statement, they didn’t vet all 1500 applicants. The track chairs went through the full list and picked the panelists they wanted, THEN the vetting commenced. The actual number of people who needed to be vetted is conspicuously missing from their statement.

  3. So, you really thought that putting just a name into ChatGPT was going to result in information about the specific person in SF?

    There are -at least- 3 published authors with the name I gave you. I know because I’ve Googled myself. The idea that you thought that was a responsible use of anything is disheartening.

    I’m going because it may be my last US Worldcon, and I guess it will also serve as Reasons -why- it is my last US Worldcon.

  4. You can’t automate the process of determining whether someone is a good person or a good panelist. Those are qualitative and moral judgments. That you even considered trying to outsource this to ChatGPT, blithely assuming that a habitually dishonest plagiarism machine would agree with you on what “unsuitable to be a panelist” means, is incredibly disheartening.

  5. So my big question is who decided to use the plagiarism machine, something you’re clearly not saying, likely to protect the person, and are they still in the role that led them to it? The latter at least seems something you could answer.

  6. I used your prompt with my name (yes, you made me use the resource-guzzling plagiarism machine), and according to ChatGPT I won two awards I never won, but not the awards I did win; I co-host two podcasts where I was a guest; and I was involved with the programming of a convention I attended as a guest, while it completely missed all the cons where I was on programming.

    But I guess I got off lightly. One of this year’s Hugo finalists got mixed up with a sexual abuser from Romania.

    Also, there are sources beyond File 770.

  7. Regarding Kathy’s statement:

    > Let me reiterate that no selected panelist was excluded based on information obtained through AI without human review and no selected panelist was chosen by AI.

    Were panelists included based on information obtained through AI without human review? To me this is a nuanced difference compared to “no selected panelist was chosen by AI”, especially because the choosing of participants had already been done manually, and the vetting was step 2. GenAI could have produced false negatives: showing no issues while there were issues. I know the vetting will be re-done, but for extra clarification I would like an answer to this.

    Will 2 people for the WSFS Division be enough for the task they have?

    Regarding SunnyJim’s statement:
    > When it was called to my attention that the vetting team was using this tool, it seemed they had found a solution to a large problem.

    What large problem? The fact you have to vet about 800 people (that’s the number of program participants at Glasgow last year)? This is an expected thing the team knew it had to do. It feels strange to me that the folks from that team could come up with their own process, and seemingly start that process, without the head (or anyone else on the programming team) being aware and raising eyebrows.
    It seems ChatGPT was “just” used to find links to sources that were then manually checked. This seems like something that was not worth the time saved over doing the searches manually, or over writing a script that generates the Google search links for all participants (a rough sketch of such a script follows below).
    I would love to see some more specific reflection on this, and how communication between teams will improve.
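
    For illustration, here is a minimal sketch (in Python, with hypothetical placeholder names) of the kind of script I mean; a person would still open and read every link themselves:

        # Rough sketch: given a list of applicant names, print a Google search
        # URL for each so a human vetter can open and review the results.
        from urllib.parse import quote_plus

        applicants = ["Example Author", "Another Panelist"]  # hypothetical names

        for name in applicants:
            query = f'"{name}" science fiction fandom'
            print(f"{name}: https://www.google.com/search?q={quote_plus(query)}")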

    > Ultimately, this process led to fewer than five people being disqualified from receiving an invitation. […] No declines have yet been issued based on this information.

    These sentences seem to contradict each other. Could you clarify?

    In general:
    I’m going to be rude and direct here, but: why is SunnyJim not resigning? She made a monumentally bad choice, which I am glad you acknowledge, that led to at least one Hugo nominee retracting his nomination, many (potential) participants retracting their participation, and some folks respected for their integrity and knowledge resigning. Kathy said rebuilding trust is important, and I think the person responsible leaving this position of power is one of the things that could help do this. I can understand that is a hard choice, but it feels strange that everyone directly involved with this stays on the team.

