Generative Artificial Intelligence Policy Feedback
Sharing this feedback I sent on my blog. Commentary is still open until late May, if you have an opinion!
Hello all,
I am writing in response to this request for commentary: https://www.sca.org/news/new-addition-of-a-policy-on-generative-artificial-intelligence-ai-and-plagiarism-commentary-request/
Overall, I am strongly in favor of having a policy for the use of generative AI within our Society, one that restricts its use to ethical applications and can be quickly adapted in handbooks as circumstances change. I like the choices in this particular policy draft overall. I would recommend the following changes to this draft:
A.1: After reading this policy, it is not clear to me whether AI-generated art is permissible on social media or in advertising when it is clearly labeled as such and not presented as original work. I feel strongly that it should not be used in this context, and that this decision should be written explicitly into this top-level policy rather than delegated to the social media handbook. The top-level policy requires a commentary period to change, unlike the social media handbook, and I feel that this commentary period provides necessary friction for such an impactful change.
A.2.E: “Translate text between languages” - I believe that AI-translated text should not be considered an accurate translation until it has been reviewed or rewritten by a human who understands both the source and target languages.
This is relevant to us in Canada, where French is an official language and where both official languages are required in some contexts. On a personal note, as an English speaker who recently moved to Canada, I have spent much of the last year learning French, and based on that experience I do not trust machine translation to capture nuance accurately all of the time. At the same time, I know translation is a skill that not everyone has, and timely access to information is also important. So, my proposal is that all AI-generated translations be clearly marked as such (which is already in this policy), and that they be rewritten by humans before being labeled as official messages (e.g. messages from leadership).
A.2: One use which I personally think should be explicitly called out is AI-powered automated notetakers. For context, I am currently employed in mundane life as a software developer at a startup that provides AI tools for financial advisors; a large part of our product is AI transcription of online meetings and AI summarization of the resulting transcript. I think there should be a specific policy here, because anyone with the link can send a notetaker to online meetings such as local business meetings or the quarterly board meeting, and it seems fair to have a widely known rationale for admitting or denying them rather than making a snap decision at each meeting. I would lean towards banning AI notetakers in our policy. My company's product explicitly limits whether our data is used for retraining models and explicitly requires the consent of all attendees (tracked by the notetaker's owner), but I do not believe that all AI notetakers opt out of retraining, nor that all attendees in an SCA context have provided or will provide their consent, nor that notetaker owners will track that consent. Also, again on a personal note, I find that when I rely on my AI notetaker at work I think less deeply during meetings, because I know the notetaker will catch what I missed; this means I am more likely to miss important information, and I do not always review the notetaker's output to realize I have missed something.
Thanks for your consideration. As always, I am available for follow up questions as desired, and I would appreciate acknowledgement that my message has been received :)
Noble Anne of Østgardr, currently resident of Ealdormere member # and expiry / rest of signature block omitted here