May 6th Statement From Chair and Program Division Head

Chair’s Statement

As promised last Friday, I am publishing this statement, in conjunction with a statement below from our Program Division Head, to provide a transparent explanation of our panelist selection process, answer questions and concerns we have received, and openly outline our next steps. As a result, it is a long statement. Many of the steps outlined below will take time to complete; we commit to keeping you updated as we move forward with our next update on May 13th.

Last week, I released an incomplete statement about an important subject. That flawed statement caused harm to the Worldcon community, to the greater SF/F community, and to the dedicated volunteers of Seattle Worldcon, many of whom felt they could no longer be proud of what they had accomplished on its behalf. I am deeply sorry for causing this harm. It was not my intent, but it was my effect.

The other harm was caused by our use of ChatGPT.

ChatGPT was used only in one instance of the convention planning process, specifically in the discovery of material to review after panelist selection had occurred.

It was not used in any other setting, such as

  • deciding who to invite as a panelist
  • writing panel descriptions
  • drafting and scheduling our program
  • creating the Hugo Award Finalist list or announcement video
  • administering the process for Hugo Award nominations
  • publications
  • volunteer recruitment

As you can read further in the statement below from our Program Division Head about the panelist selection process, ChatGPT was used for only one tailored task, which was then followed by a human review and evaluation of the information.

Although our use of ChatGPT was limited, it does not change the fact that any use at all of a tool that has caused specific harm to our community and the environment was wrong.

As noted above, our Program Division Head has also released a statement, below this one, with a transparent explanation of our panelist selection process. Also included in that statement is the query that was used to generate the results reviewed by the program team. The purpose of that statement is to show exactly where ChatGPT entered the picture, and hopefully to ameliorate some of the concerns we have heard from your comments as to whether a person has been included on or excluded from our program because of AI.

Let me reiterate that no selected panelist was excluded based on information obtained through AI without human review and no selected panelist was chosen by AI.

We know that trust has been damaged and statements alone will not repair that, nor should they. Our actions have to be worthy of your trust. As such, we are committing to taking the following steps in the remaining 100 days before the convention. Some of these steps may result in changes being made right away to our process. Some may result only in greater transparency with the community and insight for future convention committees.

  1. We are redoing the part of our program process that used ChatGPT, with that work being performed by new volunteers from outside our current team. The timeline and guidelines for this action will be finalized at the next meeting of our leadership team this coming weekend.
  2. We are reaching out to a few outside members of the community with prior experience in Worldcon programming to come in and perform an audit of our program process. They will have access to all of our systems and people in order to review what happened, confirm what is already being done to remove ChatGPT from our program vetting, and provide a report to the community about what they discovered and their recommendations. This process is already underway; we hope to have a report by the end of May.
  3. Anyone who would like their membership to be fully or partially refunded based on these issues may contact registration@seattlein2025.org for assistance.
  4. The decision process that led to our use of ChatGPT has revealed shortcomings in our internal communications. We are reevaluating our communication methods, interdependencies, staffing, and organizational structure to ensure we can detect and respond to issues at the earliest opportunity. We commit to improving our internal communication structures alongside our external communications. This will be an ongoing process.
  5. We are exploring options for providing additional oversight and guidance to the Chair and the leadership team. The plan for this action will be finalized at the next meeting of our leadership team this coming weekend.

As Chair of the Seattle Worldcon, I am promising to work with my whole team to restore the community’s trust in the convention and rectify the damage done as best we can. Some of these steps take time to implement, especially as a volunteer organization; I commit to updating you on their implementation and outcomes in regular briefings from the Chair, the first of which will be May 13th.

Finally, regarding the resignations yesterday of three members of our WSFS Division, I deeply appreciate the service they provided for Seattle Worldcon and their dedication to the community. I am glad that they were on our team for so long. We are all of us volunteers, and when people have needed to step back or resign, they have done so with my immense appreciation and gratitude for the substantial contributions they have already provided, and my understanding that sometimes leaving is the best choice for an individual.

I am also heartened that other members of the WSFS Division have chosen to stay on the team and fill in the roles vacated. I am confident that Kathryn Duval as Hugo Administrator and WSFS Division Head, and Rosemary Park as Deputy Hugo Administrator and Deputy Division Head will continue the excellent work already performed. We are committed to delivering the Hugo Awards with transparency and integrity and in celebration of our community. We appreciate that the team members who stepped away are working to ensure a smooth transition to those stepping up.

It is an honor to be the Chair of the Worldcon, and to serve the Worldcon community. As we move forward, we will continue to review your feedback and suggestions. The best way to reach our leadership team about these issues is to utilize a new email address we have created, feedback@seattlein2025.org, but we will continue to monitor comments on our blog posts and social media as well.

Kathy Bond
(she/hers)
Chair Seattle Worldcon 2025
chair@seattlein2025.org

Statement from Program Division Head SunnyJim Morgan

First, and most importantly, I want to apologize specifically for our use of ChatGPT in the final vetting of selected panelists as explained below. OpenAI, as a company, has produced its tool by stealing from artists and writers in a way that is certainly immoral, and maybe outright illegal. When it was called to my attention that the vetting team was using this tool, it seemed they had found a solution to a large problem. I should have re-directed them to a different process. Using that tool was a mistake. I approved it, and I am sorry. As will be explained later, we are embarking on the process of re-doing the vetting stage for every invited panelist, completely without the use of generative AI tools.

Second, because of widespread and legitimate concerns about our use of ChatGPT, and to correct some misinformation, I’d like to offer a clearer description of the full panelist selection process used by Seattle Worldcon. This process has only been used for panelists appearing on site in Seattle; panelists for our Virtual program have not yet been selected.

Panelists are selected by the program team, not by AI tools. We have received panelist applications from many more brilliant and talented people who are interested and qualified to be on the program than we can use on panels. No matter what process we use in selection, we will disappoint hundreds of applicants.

In stage one, our 30 track leads, each responsible for a single content area of the program, are given access to the master list of applicants and are asked to select people who they would like to invite to participate on the program. Each track lead has their own subject-area expertise and vision for the panels in their track. Some choose to invite a wide segment of suitable applicants to mix and match onto panels later, while others look for very specific skill sets and interests for specific panels. Track leads base their decisions to invite panelists on the content of the panelist application, the track lead’s knowledge of the applicant and the subject area, and additional input from members of the program team.

The applicants recommended for participation by the track leads are then moved on to stage two, the vetting process, in which we attempt to find out whether there is any information not already known about the applicant which could be potentially disqualifying. At this stage we are looking only for actions that would go against the convention’s code of conduct and antiracism statement.

A few months ago, I discovered that the vetting team assigned to this task had been using ChatGPT to quickly aggregate links, specifically asking it for links to any material that would disqualify the applicant as a panelist. Then, after the information at the links provided had been manually reviewed, I made the final decision whether to approve the person’s invitation to participate on the program.

For those who underwent vetting, we did not simply accept the results that were returned. Instead, links to primary content returned during vetting were reviewed by our team and myself before a final decision whether to invite the person was made. Ultimately, this process led to fewer than five people being disqualified from receiving an invitation to participate in the program due to information previously unknown. Fewer than five may sound low, but almost everyone who applied to be on panels at our Worldcon is great, leading to many hard choices. No declines have yet been issued based on this information.

Those who have already received program declines were turned down solely because they were not selected by track leads during the stage one application review process. As a result, their names were never submitted to the vetting team and never entered into AI tools. Additionally, there are still declines pending for individuals in this category.

Because the schedule is not yet finalized, we have the opportunity to discard the results of the vetting process and begin it again without the use of generative AI tools. We are inviting an independent, outside team to vet our panelist list without the use of ChatGPT, and move forward based on their recommendations for disqualifying any panelists who are unsuitable.

In the interest of clarity, here are a few points:

  • Track leads selected panelists, who were then vetted only for disqualifying information
  • Applicants who were not selected were not vetted by AI
  • We did not pay for the searches done
  • Only the panelists’ names, not their work or other identifying information, were entered into the prompt
  • No panel descriptions, bios, or other output was generated by AI
  • Scheduling and selection of who is on which panel is being done entirely by people

None of this excuses the use of ChatGPT for vetting. I only want to be entirely transparent about our usage, so that everyone can evaluate for themselves how they are impacted by it.

Several individuals have asked to see the ChatGPT query that was used in the vetting process. In the interest of transparency, this was our prompt:

REQUEST

Using the list of names provided, please evaluate each person for scandals. Scandals include but are not limited to homophobia, transphobia, racism, harassment, sexual misconduct, sexism, fraud.

Each person is typically an author, editor, performer, artist or similar in the fields of science fiction, fantasy, and or related fandoms.

The objective is to determine if an individual is unsuitable as a panelist for an event.

Please evaluate each person based on their digital footprint, including social, articles, and blogs referencing them. Also include file770.com as a source.

Provide sources for any relevant data.

In the program division, we are constantly living in the tension between using our limited resources effectively and building a high-quality program. Part of using new technology means making new mistakes, and learning from them. I’ve certainly learned from this, and hopefully other conventions can learn from it as well.

I don’t think I can adequately describe the amount of hard work that this program team has already done to create the Worldcon program, work that will be continuing for several more months.

People are the backbone of the program, not technology, and people are the source of every creative decision. I think the final product will reflect their hard work and dedication. It humbles me to see the amount of effort put in by this volunteer staff. This mistake was mine, not theirs, and I hope you will take that into consideration.

SunnyJim Morgan
Division Head for Program
Seattle Worldcon 2025

37 thoughts on “May 6th Statement From Chair and Program Division Head”

  1. Your timeline is incomplete.
    Morgan indicated they discovered the vetting team had been using ChatGPT after the fact. At what point did someone find the “expert” who’s been working with AI and LLMs since the 90’s and OK this process? This expert wasn’t vetted by either Morgan or Bond? Then who vetted this expert? How did an IT “expert” not know that full names are PII when they’ve been highly regulated since at least 2016?

    Your disclosure is incomplete.
    You provided PII to a third-party company in violation of your own Privacy Policy. What other third parties have been given access to this or any other PII that you have been provided? Will you be contacting those impacted by this data breach directly?

    Your remediation is incomplete.
    Given that you say you did not pay for ChatGPT, you are bound by the Terms of Service for the publicly available service which operates on an opt-out basis in regards to training. Have you completed the opt-out steps to request OpenAI not use the PII you provided them for training?

    In all sincerity, and in my opinion as an IT professional who has worked in highly regulated fields, your expert misled you. You need a real expert in data breaches to resolve this. The fact that this statement is completely devoid of any acknowledgement of the data privacy violations, including of your own legal terms, is unbelievable. If you consulted a lawyer in this matter, I highly recommend you fire them and find a new lawyer who specializes in data privacy.

    • Names in themselves are not PII subject to disclosure restrictions unless coupled with other information such as driver’s licenses or social security numbers.
      If names were PII, conventions would be very odd: “Here’s a panel, but it’s illegal for us to tell you who is on it.”
      (I have to take the refresher course on dealing with CUI yearly, and this is covered in detail).

      • Full name is indeed considered “PII” and does *not* require additional information in order to be considered so (emphasis added):

        “Any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual’s identity, such as *name*, social security number, date and place of birth, mother’s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.”

        https://csrc.nist.gov/glossary/term/PII

        Note that many people in fandom go by names within fandom that are not their legal name, which is another wrinkle.

        • That is PII, but it’s “non-sensitive PII.” Legal issues arise with its disclosure in combination with “sensitive PII” like SSNs, not by itself.

        • When you apply to speak at a con, you are pretty explicitly giving permission for your name to be published. You’re going up on stage, you’re asking to be in the program book as a panelist. Privacy claims about your name being used in a research tool are pretty dubious.

    • To add to this, if any EU citizens’ names were used, each instance of this could be considered a breach of GDPR, the fines for which could conceivably bankrupt the convention. This was a huge fuck-up which should have been reported to data protection authorities.

  2. I don’t see meaningful accountability here. We’re three statements in, and are only getting them under duress. The only resignations are from people who were not involved. The damage to the credibility of this event, and future events by extension, is extraordinary. The people who made and approved this decision are the ones who should have stepped down. Instead we get a series of grudging admissions.

    To be completely frank, if you have 30 track chairs plus other volunteers that’s enough labor to do the job manually. Everything that comes afterwards is excuse-making. Using rough numbers, if you have 1500 applicants and 30 people to do the vetting that’s 50 each. It takes 10 minutes to do a search. That’s just not that much time.

    • And, according to their own statement, they didn’t vet all 1500 applicants. The track chairs went through the full list and picked the panelists they wanted, THEN the vetting commenced. The actual number of people who needed to be vetted is conspicuously missing from their statement.

    • I really don’t get what all this is about. There is nothing wrong with using an LLM to research names at all. There is literally no difference from using Google or a library. Why are everyone’s panties in a bunch? I just don’t see any issue here whatsoever.

      • 1) ChatGPT is built on content unethically stolen from authors. As such, using it in support of a conference for authors seems like a bad idea.

        2) LLMs are not reliable for research tasks like this. They reinforce biases from their training data. They make up fake information. They mix up contexts. If they were given only names, there’s no way they were correctly gathering information without mixing it up with similarly named people. The use of such an unreliable and biased tool for such a sensitive task shows remarkably poor judgment.

      • The difference is that Chat Glorified Predictive Text is NOT A SEARCH ENGINE. It is glorified predictive text. If you ask it for “scandals,” it will find what it had last time it scraped the internet, and then probably make something up.

        Just because it scrapes the internet now and then does not mean it is up to date. It does not mean it will string together the right predictive text. It. Is. Not. A. Search. Engine.

        Volunteers should have gone to search engines, not the plagiarism fancy predictive text. (That. Is. All. It. Is. Glorified predictive text like your phone gives. It’s all the “start a sentence and let your auto-predict fill in the next word” games, but with a huge (mostly stolen) database and you don’t need to hit the middle word prediction each time.)

  3. So, you really thought that putting just a name into ChatGPT was going to result in information about the specific person in SF?

    There are -at least- 3 published authors with the name I gave you. I know because I’ve Googled myself. The idea that you thought that was a responsible use of anything, is disheartening.

    I’m going because it may be my last US Worldcon, and I guess it will also serve as Reasons -why- it is my last US Worldcon.

  4. You can’t automate the process of determining whether someone is a good person or a good panelist. Those are qualitative and moral judgments. That you even considered trying to outsource this to ChatGPT, blithely assuming that a habitually dishonest plagiarism machine would agree with you on what “unsuitable to be a panelist” means, is incredibly disheartening.

  5. So my big question is: who decided to use the plagiarism machine, something you’re clearly not saying, likely to protect the person, and are they still in the role that led them to it? The latter at least seems something you could answer.

    • This. This. A thousand times this.

      I personally never want to have anything to do with anyone who uses the fascist plagiarism machine, least of all in connection with books and writing. In service of this, I maintain a blacklist of people in the writing community who use or defend ai, to make sure I never have any dealings with them.

      If they won’t give us the names of individuals who used it, I’m simply going to add everyone with any involvement in hosting this year’s WorldCon to my list. It’s the only way to be sure I never interact professionally (I’m semi-pro) with any ai fascist.

  6. I used your prompt with my name (yes, you made me use the resource-guzzling plagiarism machine), and according to ChatGPT I won two awards I never won (but not the awards I did win), I co-host two podcasts where I was only a guest, and I was involved with the programming of a convention I attended as a guest, while it completely missed all the cons where I was on programming.

    But I guess I got off lightly. One of this year’s Hugo finalists got mixed up with a sexual abuser from Romania.

    Also, there are sources beyond File 770.

  7. Regarding Kathy’s statement:

    > Let me reiterate that no selected panelist was excluded based on information obtained through AI without human review and no selected panelist was chosen by AI.

    Were panelists included based on information obtained through AI without human review? To me this is a nuanced difference compared to “no selected panelist was chosen by AI”, especially because the choosing of participants had already been done manually, and the vetting was step 2. GenAI could have produced false negatives: showing no issues while there were issues. I know the vetting will be re-done, but for extra clarification I would like an answer to this.

    Will 2 people for the WSFS Division be enough for the task they have?

    Regarding SunnyJim’s statement:
    > When it was called to my attention that the vetting team was using this tool, it seemed they had found a solution to a large problem.

    What large problem? The fact you have to vet about 800 people (that’s the number of program participants in Glasgow last year)? This is an expected thing the team knew it had to do. It feels strange to me that the folks from that team could come up with their own process, and seemingly start that process, without the head (or anyone else on the programming team) being aware and raising eyebrows.
    It seems ChatGPT was “just” used to find links to sources that were then manually checked. The time saved hardly seems worth it compared to doing this manually, or to creating a script that generated the Google search links for all participants.
    I would love to see some more specific reflection on this, and how communication between teams will improve.

    > Ultimately, this process led to fewer than five people being disqualified from receiving an invitation. […] No declines have yet been issued based on this information.

    These sentences seem to contradict each other. Could you clarify?

    In general:
    I’m going to be rude and direct here, but: why is SunnyJim not resigning? She made a monumentally bad choice, which I am glad you acknowledge, that led to at least one Hugo nominee retracting his nomination, many (potential) participants retracting their participation, and some folks respected for their integrity and knowledge resigning. Kathy said rebuilding trust is important, and I think the person responsible leaving this position of power is one of the things that can do this. I can understand that is a hard choice, but it feels strange that everyone directly involved with this stays on the team.

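One commenter above suggests a plain script that generates the search links instead of querying an LLM. As a rough sketch of that idea (the function name, keyword list, and `site:` filter below are illustrative assumptions, not anything the convention actually used), it might look like:

```python
from urllib.parse import quote_plus

def vetting_search_urls(name, site=None):
    """Build plain search-engine URLs for manually vetting one applicant.

    Pairs the applicant's name with a few conduct-related keywords and
    returns ready-to-click Google search URLs; `site` optionally limits
    results to a single domain (e.g. "file770.com"). No LLM is involved:
    a human still opens each link and judges the sources directly.
    """
    keywords = ["harassment", "code of conduct", "controversy"]
    urls = []
    for kw in keywords:
        query = f'"{name}" {kw}'          # exact-phrase match on the name
        if site:
            query += f" site:{site}"      # restrict to one known source
        urls.append("https://www.google.com/search?q=" + quote_plus(query))
    return urls

# One batch of links per applicant, generated in bulk for the vetting team:
links = vetting_search_urls("Jane Q. Applicant", site="file770.com")
```

Run over a list of names, this produces a checklist of ordinary search results with none of the hallucination or mistaken-identity risks commenters describe, at the cost of the human review time the team was trying to save.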
  8. Kathy, thank you for updating us and providing a more complete explanation of what happened and for your apology. It is a difficult thing to be con chair and to have to draft statements like this.

    It seems that all concerned recognize that using chatGPT, both in this instance, and really, for *any* use case, is a bad idea, and I am glad to hear that not only will it not be used again by Seattle Worldcon, but any of the results of its prior use will be discarded and remediated.

    Here’s hoping that the addition of volunteer staff will ease workload and ensure errors like this don’t happen again.

    Thank you, again.

    • Using ChatGPT to do some quick googling for you is not a bad idea, it’s a good one. There’s no difference between using ChatGPT for research purposes and getting an intern to do it except it saves human time.

  9. So which order did it go in? Potential panelists were picked from the applicants pool by track leads and then they were plagiarism machine “vetted” or applicants were “vetted” first and the hallucinations of the plagiarism machine were checked by track leads to decide if they wanted those people on panels?

  10. So was I denied by AI or by a track lead that’s never heard of me? Also, I did the same check that you did on those who were “vetted” and nothing bad showed up for me; it also failed to bring up anything about me at all, no sources or anything. I have been a panelist at Costume Con and at regional and local conventions. Maybe the Chair and Head of Programming should have been focusing on this convention rather than being attending professionals at Norwescon 47. It seems like priorities were not in the right place. When WorldCon in Spokane happened, those who also ran SpoCon decided to postpone the con for the year or really diminish how much was put into SpoCon. The events that have gone into selecting professionals will FOREVER tarnish Worldcons for the foreseeable future. This is the mark that you have left on WorldCon. I want the Head of Programming to tell ME exactly where in the process I was denied a spot at the professional table.

  11. This was a classic case of A.I. being used properly, for the drudge work that it does best, thus clearing up hundreds of hours for human volunteers to do what humans do best. I salute whoever was responsible and hope that future Worldcons will refine this process further. There are plenty of brilliant IT people in fandom and perhaps a team could devise something similar but better, even if only by virtue of never having been used for plagiarism-based faux-creativity (which Worldcon did NOT do!). In any case, ChatGPT is only a tool, like an axe, and not everyone who uses an axe is complicit with axe murderers; some of us only use them to chop wood. Anyone who hates A.I. on general principle is no better than a Luddite and has no place in the Science Fiction community. Robots are kind of “our thing.”

    • Hear hear!

      There is nothing *inherently* wrong with “Generative AI” today, in a general sense. We’ve been using such tools in minor ways for years, most of us without even realizing it. Tools like Grammarly and Microsoft’s grammar checker, spam detection for email or blog comments, and anything else that is “trained”, by the developer or the user, on existing data to become better able to do what it is designed to do.

      As you noted in the comment I’m replying to, this kind of drudge work is *exactly* what these tools do best, leaving us humans more time to do other things. What’s unfortunate is that the currently available tools were “trained” with data that was collected “en masse”, in primitive and unethical ways. The decisions of the developers to send them out to the internet with instructions to “scrape” it for all the information that could be found lacked finesse, curation, and manners, picking up copyrighted works without the creators’ approval and treating all data as equally valid, with the trash on social media and extremist blogs given the same weight as professional news and first-hand sources.

      This is an unfortunate reality, and I can only hope that public pressure will lead future organizations working in the AI field to recognize the harm caused by the naive approaches to training chosen by their forebears and move forward with fresh and ethical approaches.

      But in the meantime I don’t blame the programming team for wanting to make use of an AI tool to assist their process. Substantively, all they did was find a very efficient way to “Google” for the type of information of most concern for a large number of individuals at one time. They were not attempting to “create” anything using the tool, so could not run afoul of the copyright infringement issues any more than manually doing a series of Google searches on each individual’s name would. And when the AI did return something about someone, a human still went through and reviewed the reported info and sources for reliability and accuracy.

      Yes, there are issues with ChatGPT and the organization behind it, but this specific use should not be causing all this uproar and drama.

      • >Yes, there are issues with ChatGPT and the organization behind it, but this specific use should not be causing all this uproar and drama.

        It should be causing far more uproar. ANY encroachment of ANY use of ai in ANY relation to a creative field can only possibly end badly for those creatives.

        “They just used it to efficiently Google” today.
        “They just used it to generate some blurbs, no one would’ve gotten paid anyway” tomorrow.
        “They just used it to create the con website, it’s not taking anything from a writer” next week.
        “It’s just one ai-generated short story, 99% of the awards are going to humans” next month.
        “We’re sorry, but our publishing house isn’t taking on any new human writers, it’s more efficient to generate our own” next year.

        We have to establish a culture in the writing community that ANY use of ANY ai is completely verboten.

        LLMs are doing great things in cancer research. Fine. Go research cancer.

        They have no place in writing.

        • Why is cancer research exempt from your slippery slope if simple googling is not?

          Better still, why not draw the line at the thing we’re actually concerned about, rather than creating the slippery slope to begin with?

        • LLMs are not used in medical or scientific research. The AI tools used there come from separate branches of AI research, like deep learning, that aren’t prone to hallucinations.

    • THIS! It was a perfectly proper use of AI, and this backlash makes me sad for humanity. LLMs have utility when properly used, and they are not going away… quite the opposite. Y’all need to accept that and get over it.

  12. I love having the plagiarism machine that routinely makes stuff up evaluating people for “scandals.” A+ job.

  13. As someone who is vehemently against “ai” (I hate that we call it that, it’s not really anything like ai in the SF sense) in creative fields, this response is pretty good.

    Redoing the vetting without ChatGPT and committing to not use it in the future is exactly what I, and I think a lot of us, wanted. So thank you for that.

    That said, I do have two remaining concerns.

    1) You’re clearly obfuscating information about who exactly used ChatGPT. Mr. Morgan says he takes responsibility because the buck stops with him and he ultimately approved its usage. That’s noble, but it’s not accountability. I want the names of every single individual who used or approved the use of ChatGPT so that I can avoid ever interacting with them. I’m just a random semi-pro writer, it’s not like I run a publishing house or have any power, so my knowing this won’t hurt their careers, and I think as a writer directly and adversely affected by the racist plagiarism machine, I deserve to know who uses and approves its usage so I can personally avoid them. It’s simple self-preservation.

    2) This response was written in PR speak. For an official statement, I understand. Now I want a human response.

    How can any of you have POSSIBLY thought it was okay to use ChatGPT (which you acknowledge as based on the plagiarism of your colleagues) in ANY capacity when running a WRITING CONVENTION?! Of all things!

    I can think of only two possibilities. Either you are SO out of touch with the community that you genuinely didn’t expect this response, in which case you have no business running a con. Or you disagree with the 99% of writers who oppose the use of the racist plagiarism machine which has directly harmed them, in which case you have no business running a con.

    Just talk to us like people and not HR. What on Earth were you thinking?! How can you POSSIBLY have thought this was okay?!

    In any event, I want Mr. Morgan and Ms. Bond to resign, and I never want to see either of their names again. I CERTAINLY hope neither party is planning to ever have anything to do with hosting any other con or event ever again. I’ve never heard of either of you, I don’t know if you’re writers or in publishing or what, but you should go away and never come back.

    For God’s sake. You’ve seen what ChatGPT has done to the world in just a few short years. How much worse, how much more misinformed, how much crueller the world is because of this evil, fascist program.

    At long last, have you no sense of decency?

    Reply
  14. Once again, the fan reaction to your approach, at least from the most vocal, is way over the top. Way, way, way over the top, and often based on incorrect apprehensions. You have a task to do — build a program from over 1,000 applicants — and very limited resources to do it with. That some of your team sought to use new tools to help do the task better with the resources available is exactly what they should be doing. You didn’t micromanage them; they picked their own tools. At worst, they picked some tools some fans don’t like or which those fans fear could introduce some error. The process, done as it has been in the past, is already fraught with error due to the limited resources. I don’t know how many panels I’ve seen where the selection of panelists was poor; it’s a running joke that somebody on every panel says, “I am not sure why I am on this panel.” You were aware of the potential error risks. You took good steps to mitigate them.

    You missed only one thing. Some fans are professionals at being offended. And sadly, as a membership organization, you do have to work to keep them happy. Rather than just criticizing the choices of your team or demonstrating actual errors and why they are unspeakable horrors, they call for heads to roll. At most, this should have been a few people pointing out issues, and the programming team saying, “We understand, we will work with our limited resources to improve our results.” The programming is there for the members. It is not there for the program participants (which I say having been a program participant in every worldcon I’ve attended.) There’s no duty to keep the program participants happy other than to keep the members engaged.

    The biggest false complaint continues to be about copyright violation by ChatGPT. This is really two issues. First, the early versions of ChatGPT were trained on a corpus called the Pile, which contained a sub-corpus with some pirated books. This is universally agreed to have been wrong (though how deliberate it was is debated). Lawsuits are underway. This was not the case for the tools used by Seattle 2025, however.

    Those tools are trained on materials which are made freely available by their owners, though a large fraction are under copyright. Under copyright, but again, made freely available. As governed by the robots.txt file on every web site, search engines (such as Google pre-Gemini) have for decades downloaded these copyrighted materials and calculated detailed statistics on them to build their indexes. They actually keep copies and can regenerate all of these documents on request; they used to do that, but generally don’t at present. While there have been occasional questions, it is usually not asserted that building Google and using Google are copyright violations.
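    (For anyone unfamiliar with the robots.txt mechanism mentioned above: it is a plain-text file a site publishes to tell crawlers which paths they may fetch, and it can be checked programmatically. A minimal sketch using Python's standard `urllib.robotparser`; the site name and rules here are made up for illustration:)

    ```python
    from urllib.robotparser import RobotFileParser

    # A hypothetical robots.txt: all crawlers are barred from /private/,
    # and a crawler identifying as "BadBot" is barred from everything.
    robots_txt = """\
    User-agent: *
    Disallow: /private/

    User-agent: BadBot
    Disallow: /
    """

    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())

    # can_fetch(user_agent, url) answers: may this crawler fetch this URL?
    print(rp.can_fetch("*", "https://example.com/articles/post.html"))      # True
    print(rp.can_fetch("*", "https://example.com/private/notes.html"))      # False
    print(rp.can_fetch("BadBot", "https://example.com/articles/post.html")) # False
    ```

    (Compliance with robots.txt is voluntary on the crawler's part, which is part of why the copyright question remains contested.)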

    LLMs do exactly the same thing, but the statistics they calculate are vastly more detailed. And so some assert that these sorts of detailed statistics could be a copyright violation. This is an understandable position, but many fans don’t seem to understand that this is a subtle question, not a settled one. Many copyright experts contend that, because what LLMs and search engines do is very similar in kind though different in degree, reading web pages and learning from them is what’s known as a fair use, not a violation, just as what Google does is a fair use. (Humans also read copyrighted web pages and learn from them; of course this is the intention of the pages, and not in any way a violation.) One may disagree with this, but understand again that this is not a settled matter. One may hold the opinion that the tools the convention staff used were made unethically, but do not presume that this is some sort of universally accepted fact and, on that basis, accuse them of using tools that break the law and demand that they resign.

    Rather, chill. The convention staff did just what they should do. Criticize it if desired, but declarations that leaders should resign, or that people should boycott this convention or all conventions in the USA over this, are just out of proportion. Let the convention deal with the matter and work on making the best program they can with the resources available. If you don’t like who you see on a panel, I have an idea… don’t go to the panel. Worldcons have a huge number of tracks; you will not find yourself with a lack of choices. Get over it.

    Reply
    • Thank you for voicing reason here. And the response to your informative post — irrational and narrow-minded — is illustrative of why ethical AI use needs to be defended just as loudly as it’s being assaulted.

      WorldCon did nothing wrong. It bothers me that they’re taking the position of defense here. Grow a backbone, people. You’re using a freely available tool to accomplish a mundane clerical task! Quit bowing to the people who are so scared of AI they refuse to learn even the least bit about how it works.

      Reply
  15. This is unironically despotic and authoritarian. The next thing you know, you’ll be arguing in favor of a PreCog Unit.

    Reply
