This weekend I attended WikiConference North America. I decided to go somewhat at the last moment, but am really glad I did. This is the first non-technical Wikimedia community conference I have attended since COVID and it was great to hear what the Wikipedia community has been up to.
I was on a bit of a budget, so I decided to get a cheaper hotel that was about an hour away by public transit from the venue. I don't think I'll do that again. Getting back and forth was really smooth - Toronto has great transit. However, it meant an extra hour at the end of the day to get back, and waking up an hour earlier to get there on time, which really added up. By the end I was pretty tired and would much rather have had an extra 2 hours of sleep (or an extra 2 hours chatting with people).
Compared to previous iterations of this conference, there was a much heavier focus on on-wiki governance, power users and "lower-case s" Wikipedia (not Wikimedia) strategy. I found this quite refreshing and interesting, since I mostly do MediaWiki dev stuff and do not hear about the internal workings of Wikipedia as much. Previous versions of this conference focused too much (imho) on talks about outreach, which, while important, were often a bit repetitive. The different focus was much more interesting to me.
Key Take-aways
My key take-away from this conference was that there is a lot of nervousness about the future. Especially:
- Wikipedia's power-user demographics curve is shifting in a concerning way. Particularly around admin promotion.
- AI is changing the way we consume knowledge, potentially cutting Wikipedia out, and this is scary
- A fear that the world is not as it once was and the conditions that created Wikipedia are no longer present. As the keynote speaker Selena Deckelmann phrased it, "Is Wikipedia a one-generation marvel?"
However, I don't want to overstate this. It's unclear to me how pervasive this view is. Lots of presenters presented views of that form, but does the average Wikipedian agree? If so, is it more an intellectual agreement, or are people actually nervous? I am unsure. My read is that people were vaguely nervous about these things, but by no means was anyone panicking about them. Honestly though, I don't really know. I also think some of these concerns are undercut by the long history of people worrying about similar things while Wikipedia has endured anyway. Before admin demographics, people were panicking about new user retention. Before AI changing the way we consume content, it was mobile (a threat which I think is actually a much bigger deal).
Admin demographics
That said, I never quite realized the scale of the admin demographic crisis. People always talk about there being fewer admin promotions now than in the past, but I did not realize until it was pointed out that it is not just a little bit fewer but allegedly 50 times fewer. There is no doubt that a good portion of the admin base is made up of people who started a decade (or two) ago, and newly promoted admins are fewer and farther between.
A particular thing that struck me as related to this at the conference is how the definition of a "young" Wikipedian seems to be getting older. Occasionally I would hear people talk about someone in high school as being a young Wikipedian, with the implication that this is somewhat unusual. However, when you talk to people who have been Wikipedians for a long time, they often say they were teenagers when they started. It seems like being a teenage Wikipedian was really common early in the project, but is now becoming more rare.
Ultimately though, I suspect the problem will solve itself with time. As more and more admins retire, the workload on those remaining will increase until the mop is handed out more readily out of necessity. I can't help but be reminded of all the panic over new user retention, until eventually people basically decided that it didn't really matter.
AI
As far as AI goes, hating AI seems to be a little bit of a fad right now. I generally think it is overblown. In the Wikipedia context, this seems to come down to three things:
- Deepfakes and other media manipulation making it harder to have reliable sources (mis/disinformation)
- Using AI to generate articles that get posted but are perhaps not properly fact checked, or are otherwise poor quality in ways that aren't immediately obvious or that existing community practice is not yet well prepared to handle
- Voice assistants (Alexa), LLMs (ChatGPT) and other knowledge distribution methods that use Wikipedia data but cut Wikipedia out of the loop (a continuation of the concern that started with the Google Knowledge Graph)
I think by and large it is the third point that was most concerning to people at the conference, although all three were discussed at various points. The third point is also unique to Wikipedia.
There seemed to be two causes of concern for the third point. First, there was worry over lack of attribution and a feeling that large Silicon Valley companies are exploitatively profiting off the labor of Wikipedians. Second, there was concern that by being cut out of the loop, Wikipedia loses the ability to recruit people when there is no edit button, and maybe even loses brand awareness. While totally unstated, I imagine the inability to show fundraising banners to users consuming via such systems is probably on the mind of the WMF fundraising department.
My initial reaction to this is probably one of disagreement with the underlying moral basis. The goal was always to collect the world's knowledge for others to freely use. The free knowledge movement literally has free in the name. The knowledge has been collected and now other people are using it in interesting, useful and unexpected ways. Who are we to tell people what they can and cannot do with it?
This is the sort of statement that is very ideologically based. People come to Wikimedia for a variety of reasons; we are not a monolith. I imagine that people probably either agree with this view or disagree with it, and no amount of argument is going to change anyone's mind about it. Of course, a major sticking point here is that ChatGPT is arguably not complying with our license, and lack of attribution is a reasonable concern.
The more pragmatic concerns are interesting though. The project needs new blood to continue over the long term, and if we are cut out of the distribution loop, how do we recruit? I honestly don't know, but I'd like to see actual data confirming the threat before I get too worried.
The reason I say that is that I don't think voice assistants and LLMs are going to replace Wikipedia. They may replace Wikipedia for certain use cases, but not all of them, and especially not the use case our recruitment base comes from.
Voice assistants are generally good for quick fact questions. "Who is the prime minister of Canada?" type questions. The type of stuff that has a one-sentence answer and is probably stored on Wikidata. LLMs are somewhat longer form, but still best for information that can be summarized in a few paragraphs, maybe a page at most, and has a relatively objective "right" answer (from what I hear - I haven't actually used ChatGPT). Complex, nuanced topics are not well served by these systems. Want to know the historical context that led to the current flare-up in the Middle East? I don't think LLMs will give you what you want.
Now think about the average Wikipedia editor. Are they interested in one-paragraph answers? I don't know for sure, but I would posit that they tend to be more interested in the larger, nuanced story. Yes, other distribution models may threaten our ability to recruit from the users of those systems, but I don't think that is the target audience we would want to focus recruitment on anyway. I suppose time will tell. AI might just be a fad in the end.
Conclusion
I had a great time. It was awesome to see old friends but also to meet plenty of new people I did not know. I learned quite a bit, especially about Wikipedia governance. In many ways, it is one of the more surprising wiki conferences I've been to, as it contained quite a bit of content that was new to me. I plan to write a second blog post about my more raw, unfiltered thoughts on specific presentations. (Edit: I never did make a second post, and I guess it's late enough at this point that I probably won't, so never mind about that.)