FAQ - Generative AI and Signpost
Prepared by the Signpost Global Team July 2023, Updated March 12, 2024
Background:
With the launch of ChatGPT in late 2022 and Gemini in Winter 2023/2024, there has been increasing talk about AI in the press and the workplace. There is a tremendous amount of excitement and optimism related to the possibilities that generative AI affords, but also significant fears related to safety and impact on employment, and larger societal questions such as its effect on the climate.
Given the work that we do at Signpost, questions surrounding how we will use AI are frequent. We have received many questions from Signpost implementing teams, donors, and partners over the course of 2023 and 2024. This FAQ offers direct responses to the questions most frequently asked by Signpost implementing teams. The responses reflect our current thinking, as we continue to learn and as the technological context continues to evolve.
Frequently Asked Questions:
Question: Will AI replace my job or future Signpost instances?
- No. We are testing possible ways in which AI can help with the efficiency of teams so that we are able to respond to more requests and create content, but AI will not replace existing staff or the need for human moderation and content creation.
- Our analysis is that AI-driven chatbots may be sufficient to effectively meet the complex needs of the populations we serve for certain questions, in certain languages where there is well-developed content. With the capacity this frees up, we will aim to reach more people using ad targeting and less restrained posting. Furthermore, chatbots can only respond based on what they have been taught; if the AI has no relevant content, it cannot perform. This means new inquiries and new content will not be answerable by AI.
- We are in the process of determining whether it is possible to “do no harm” with bot driven responses to real queries. Currently we are testing using simulated inquiries in a safe sandbox.
- Signpost champions a human touch in information and community engagement and recognizes the importance of community / human connection when responding to questions and creating community-driven information products.
Question: What is a chatbot? Do we use them anywhere?
- A chatbot can take many forms. In simple terms, a chatbot offers pre-programmed content, or computer-generated content, in response to human inputs/questions.
- Manual: These are bots that offer a menu of options, e.g. a phone tree, where the human user of the bot selects an option to follow from a limited range of choices.
- Some teams work with these automated menus already.
- Suggestive: A bot that sits alongside humans interacting with one another and suggests responses to questions in order to help the humans find the best solutions.
- There are suggestive bots within Zendesk that some Signpost teams already use.
- Generative: A bot that sends answers it derives on its own from a set of content that it has access to.
- This is what ChatGPT and other LLMs (Large Language Models) are. It is not being used by any teams for interactions with Signpost clients. It is, however, being tested by Signpost to see whether generative AI can support moderators by providing suggested responses.
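To make the distinction concrete, the "manual" type above can be illustrated with a few lines of code. This is a minimal sketch only; the menu topics and answers are hypothetical placeholders, not real Signpost content.

```python
# Minimal sketch of a "manual" menu bot (phone-tree style).
# The topics and answers below are hypothetical examples.

MENU = {
    "1": ("Legal aid", "Here is general information about legal aid services."),
    "2": ("Healthcare", "Here is general information about healthcare services."),
    "3": ("Housing", "Here is general information about housing support."),
}

def show_menu() -> str:
    """Render the fixed list of options shown to the user."""
    lines = ["Please choose a topic:"]
    for key, (label, _) in MENU.items():
        lines.append(f"  {key}. {label}")
    return "\n".join(lines)

def respond(choice: str) -> str:
    """Return the pre-programmed answer for a menu choice."""
    if choice in MENU:
        return MENU[choice][1]
    # Anything outside the fixed menu falls back to a human moderator.
    return "Sorry, I don't recognize that option. A moderator will follow up."
```

Note that a manual bot can never produce an answer outside its fixed menu, which is exactly what makes it predictable; suggestive and generative bots trade away that predictability for flexibility.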
Question: Can I build a ChatGPT or Gemini (generative) chatbot to respond directly to clients? Is it safe?
- Potentially, but not until we adequately test it. It is not possible to safely build a bot to reply directly to clients with precise and accurate information without significant development and derisking. The Signpost development team is beginning this work but it will take significant time and effort.
- While some replies from a properly trained bot may be accurate and similar to a human moderator's response, many others would be incomplete or wrong. It is not safe to expose clients to AI-generated information unless a moderator reads responses and rigorously verifies the information before it is disseminated, or there has been substantial and controlled testing using quality content and development. Both incomplete and incorrect information count as misinformation and could be quite harmful to individuals and communities.
- There is potential that future iterations of generative chatbots could be safe using the following practices:
- Potentially Safe:
- Fine Tuned: A potentially safe bot may need to be “fine tuned” for the work that we do. This means that we “teach” it our information and preferences so it becomes more accurate based on classified data we “feed” it. Training would have to be extremely precise, and the data security of the model would need to be ensured according to our strict standards. Rigorous testing would need to be conducted in a safe (not live) environment to ensure that tuning was successful. It is important that it does not train on the personal data of our clients without significant investment into the data protection, legal, and consent implications. As mentioned above, we also do not envision this replacing the need for human interaction; rather, it may allow us to more quickly address certain types of client concerns and save agents time to address the most complex concerns.
- Knowledgeable: By connecting Signpost content to a bot, roughly 80% of inquiries can receive helpful responses in select languages like English, Arabic, French, and Spanish. However, the remaining 20% of unreliability presents significant concern and needs to be derisked appropriately using a scientific approach.
- Prompted: The bot has been given rules that define what it can and cannot say. It identifies itself as a bot so people know they are not speaking with a person. It does not generate responses outside of the content that we have told it to use.
- Not Safe: Untrained, has no rules about how to act, does not identify itself as a bot, trains on personal data, etc. This means it is not safe to use ChatGPT, Copilot, or Gemini without fine-tuning, prompting, and a controlled knowledge base.
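The "prompted" and "knowledgeable" practices above can be sketched as a simple prompt-construction step: rules are prepended, and only retrieved articles are supplied as allowed content. Everything here is illustrative; the rules, articles, and keyword lookup are placeholder assumptions, and a real deployment would pass such a prompt to a carefully tested model, not ship this as-is.

```python
# Sketch of the "prompted" + "knowledgeable" pattern.
# Articles and rules are hypothetical placeholders, not real Signpost content.

ARTICLES = {
    "asylum": "Asylum applications are handled by the national migration office.",
    "health": "Free clinics are available; bring an ID document if you have one.",
}

RULES = (
    "You are an automated assistant, not a person. Always say you are a bot. "
    "Answer ONLY from the provided articles. If the articles do not cover the "
    "question, say you do not know and refer the person to a human moderator."
)

def retrieve(question: str) -> list[str]:
    """Naive keyword lookup standing in for real retrieval."""
    q = question.lower()
    return [text for topic, text in ARTICLES.items() if topic in q]

def build_prompt(question: str) -> str:
    """Combine the rules and retrieved articles into a guarded prompt."""
    context = "\n".join(retrieve(question)) or "(no matching articles)"
    return f"{RULES}\n\nArticles:\n{context}\n\nQuestion: {question}"
```

The design point is that the rules and the content boundary are set by us, not by the model: when retrieval finds nothing, the prompt explicitly instructs the bot to defer to a human moderator rather than generate an answer.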
Question: Can the technology team build us a (generative) chatbot?
- Signpost is evaluating the safest and best ways to deploy more advanced AI tools in the near and long-term future. We are not currently building AI-powered bots for each country team. We are doing research and development into what is possible and what a future architecture for generative bots could look like. We are focused on exploring how AI can support the work that is done by Signpost personnel to save time in processes, boost efficiency, and improve our work.
Question: Can I use ChatGPT to help me write content? Is it accurate? How much should I check?
- In keeping with the journalism standards in our editorial practices, AI cannot play a role that would replace editorial personnel's work. It is a tool, like others, and guidance from the Signpost Humanitarian Content and Social Media Advisor should be sought if teams are considering its implementation.
- For content creation and management, we will follow the Signpost editorial guidelines here.
- Signpost content must be accurate and up to date. Some AI-generated content is accurate, but other AI-generated content is not. It is often difficult to tell when information is inaccurate without thorough checks, because AI mimics the patterns of trustworthy information.
- Signpost provides contextual information specific to certain geographic locations. AI generated content may lack contextualization and need-based information.
- AI collects content for you from existing information scraped from the internet (before September 2021 for ChatGPT), meaning there is a chance of duplication and copyright issues with AI-generated content, and that new topics may not be within its memory.
- AI can support suggesting article titles, keywords, and content headlines. If teams are considering using AI to support SEO, please refer to the Signpost SEO guidance here.
Question: What is Zendesk doing with AI and will we be able to use it?
- Zendesk AI: Zendesk has released its own AI tools. Within the upcoming months you are most likely to hear about possible implementations of Zendesk AI (language detection, sentiment detection, intent detection, macro suggestion, macro creation, ticket summarization) and early designs of chatbot frameworks. However, in its current configuration Zendesk AI does not consider our business use case, so the AI is not yet easily accessible or useful for our work. Zendesk AI may become more accessible for our use case, but as of now the technology is not well suited to our needs until we are able to customize the training of our own Zendesk AI.
- The suite of Zendesk AI tools managed and developed by Zendesk include the following features which are most useful for us:
- Intelligent Triage and Context Panel: The AI will help interpret what the ticket is about, the sentiment / mood of the client, and other details like the language.
- If this feature includes analysis of our custom form fields such as category, it could streamline workflows and improve efficiency of moderation work, including auto-update parts of the ticket form.
- Article Recommendations: Recently fortified by OpenAI, Zendesk offers an article recommendations feature in chatbots that recommends Help Center articles to anybody using the chatbot.
- Expand: Adds additional language to the content of the agent’s comment.
- This could help moderators/community liaisons offer more complete responses to complex topics. (Language limitations)
- Tone Change: Changes tone of messages to be more formal or less formal automatically without changing responses’ content meaning.
- This could help moderators/community liaisons ensure that the tone of messages is well considered for each client’s situation. (Language limitations)
- Ticket Summarization: OpenAI reads your ticket and provides a summary, customer sentiment, and key information (which can pull from previous contact tickets).
- This could help make sense of larger sets of tickets and interactions to better understand the context of clients' needs at a greater level.
Question: How is Signpost going to use AI in the future?
- Here are some of the possibilities we are looking into testing, noting that the outcomes of the tests are uncertain and the goal is to identify efficiencies for clients and teams:
- Chatbots
- Using AI to create service mapping entries from service mapping and open web data
- A bot companion to help find information on our website
- Reformatting content to make social media posts with more “viral” keywords
- Automatic meta tagging, keywording, and search improvements
- Generated visual graphics and multimedia content to create a Signpost content library
- Creating synthesized news content from selected trusted sources
- We expect that our use of AI will evolve over time and as the technology matures. Signpost is reviewing options to use AI for the features mentioned above as they are released and researching other technologies along the way. Quality of programming, safety, and security remain at the core of all research.
- We will be evaluating potential applicability of AI against the best practices that inform our programming approaches as well as rigorous digital protection and ethical standards.
- We are very open to ideas from implementing teams and will likely be asking for teams to help us test and pilot potential applications of AI for Signpost!
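One of the possibilities listed above, automatic meta tagging and keywording, can be sketched very simply. This is a toy frequency-based example under stated assumptions (the stop-word list is a small hypothetical one); a real pipeline would likely use an LLM or a proper NLP library instead.

```python
# Toy sketch of automatic keyword extraction for meta tagging.
# The stop-word list is a small hypothetical placeholder.

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "for", "is"}

def extract_keywords(text: str, top_n: int = 3) -> list[str]:
    """Return the most frequent non-stop-words as candidate tags."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [w for w, _ in counts.most_common(top_n)]
```

Even a crude heuristic like this shows where automation could save time: articles get candidate tags instantly, while an editor still reviews and corrects them before publication.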
Question: When it comes to AI, what is Signpost excited about? What is it worried about?
We are generally excited about the following:
- Possibility to reach more people with AI powered chat.
- Possibility to classify data faster than ever before.
- Possibility to simplify work of moderators/community liaisons to improve efficiencies, quality, and work experience of personnel.
- Possibility of better program quality overall.
- Possibility to improve site performance and reach.
We are generally worried about the following:
- Moving too fast and damaging program teams or clients / client communities.
- Moving too slow and being left behind by the rapidly evolving tech landscape.
- Proliferation of misinformation using bots and the possibility of encountering more common “hallucinations” (a term for AI-generated information that seems true but is false).
- A general sense within the aid sector that humans are no longer needed now that we have smart bots.