FAQ - Generative AI and Signpost
Prepared by the Signpost Global Team July 2023
Background:
With the launch of ChatGPT in late 2022 and Bard in March 2023, there has been increasing talk about AI in the press and the workplace. There is a tremendous amount of excitement and optimism about the possibilities that generative AI affords, but also significant fears related to safety, the impact on employment, and broader societal questions such as effects on the climate.
Given the work that we do at Signpost, questions about how we will use AI are frequent. We have received many questions from Signpost implementing teams, donors, and partners over the course of 2023. This FAQ offers direct responses to the questions most frequently asked by Signpost implementing teams. The responses reflect our current thinking, as we continue to learn and as the technological context continues to evolve.
Frequently Asked Questions:
Question: Will AI replace my job or future Signpost instances?
- No. We are testing ways in which AI can help teams work more efficiently, so that we can respond to more requests and create more content, but AI will not replace existing staff or the need for human moderation and content creation.
- Our analysis is that AI-driven chatbots are insufficient to effectively meet the complex needs of the populations we serve. Beyond this, we do not know whether we can fully ensure "do no harm" with bot-driven responses to real queries.
- Signpost champions a human touch in information and community engagement, and recognizes the importance of community and human connection when responding to questions and creating community-driven information products.
Question: What is a chatbot? Do we use them anywhere?
- A chatbot can take many forms. In simple terms, a chatbot can offer pre-programmed content, computer-generated content, or answers to human inputs and questions.
- Manual: These bots offer a menu of options, e.g. a phone tree, where the human user selects an option from a limited range of choices.
- Some teams work with these automated menus already.
- Suggestive: A bot that sits alongside humans interacting with one another and suggests responses to questions, in order to help the humans find the best solutions.
- There are suggestive bots within Zendesk that some Signpost teams already use.
- Generative: A bot that composes its own answers, drawing on a set of content that it has access to.
- This is what ChatGPT and other LLMs (Large Language Models) are. Generative bots are not being used by any teams for interactions with Signpost clients.
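To make the "manual" type above concrete, it can be sketched as a fixed menu where every possible reply is pre-written (the menu entries and wording here are illustrative, not part of any Signpost system):

```python
# Minimal sketch of a "manual" menu-style bot: the user can only pick
# from a fixed set of options, so nothing is ever generated by a model.
MENU = {
    "1": "Health services: see our clinic directory.",
    "2": "Legal aid: see our legal information page.",
    "3": "Talk to a human moderator: a team member will reply shortly.",
}

def menu_reply(user_input: str) -> str:
    """Return the pre-programmed answer for a menu choice."""
    prompt = "Reply 1 for health, 2 for legal aid, 3 for a moderator."
    return MENU.get(user_input.strip(), f"Sorry, I didn't understand. {prompt}")
```

The key contrast with a generative bot is that every string this bot can ever send exists in the table above, so it can be fully reviewed in advance.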
Question: Can I build a ChatGPT or Bard (generative) chatbot to respond directly to clients? Is it safe?
- No. It is not possible to safely build a bot to reply directly to clients with precise and accurate information at this time.
- While some replies from a properly trained bot may be accurate and similar to a human moderator's response, many others would contain incomplete or wrong information. It is not safe to expose clients to AI-generated information unless a moderator reads and rigorously verifies it before it is disseminated. Both incomplete and incorrect information count as misinformation and could be quite harmful to individuals and communities.
- There is potential that future iterations of generative chatbots could be made safe using the following practices:
- Potentially Safe:
- Trained: A potentially safe bot may need to be "trained" for the work that we do. This means that we "teach" it our information and preferences, so it becomes more accurate based on classified data we "feed" it. Training would have to be extremely precise, and the data security of the model would need to be ensured according to our strict standards. Rigorous testing would need to be conducted in a safe (not live) environment to confirm that training was successful. It is important that the bot does not train on the personal data of our clients. As mentioned above, we do not envision this replacing the need for human interaction; rather, it may allow us to address certain types of client concerns more quickly and free up agents' time for the most complex concerns.
- Prompted: The bot has been given rules that define what it can and cannot say. It identifies itself as a bot, so people know they are not speaking with a person. It does not generate responses outside of the content that we have told it to use.
- Not Safe: A bot that is untrained, has no rules about how to act, does not identify itself as a bot, trains on personal data, etc.
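The "prompted" practices above can be sketched as a guardrail layer around whatever generates the draft reply. This is a hypothetical illustration only: the function names and the approved-content list are invented for this sketch, and a real model call would sit where `draft_reply` is:

```python
# Hypothetical sketch of "prompted" guardrails: the bot always identifies
# itself, and any draft answer not grounded in content we explicitly
# provided is withheld and escalated to a human moderator.
BOT_DISCLOSURE = "I am an automated assistant, not a human moderator."

APPROVED_SNIPPETS = [
    "Registration offices are open Monday to Friday.",
    "Free legal aid is available at the community center.",
]

def draft_reply(question: str) -> str:
    """Stand-in for a model-generated draft (a real LLM call would go here)."""
    return "Registration offices are open Monday to Friday."

def guarded_reply(question: str) -> str:
    draft = draft_reply(question)
    # Only release answers composed from content we explicitly provided.
    if any(snippet in draft for snippet in APPROVED_SNIPPETS):
        return f"{BOT_DISCLOSURE} {draft}"
    return f"{BOT_DISCLOSURE} I can't answer that; a human moderator will follow up."
```

The point of the sketch is that the rules (self-identification, restriction to approved content, escalation) live outside the model, where the team can review and test them.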
Question: Can the technology team build us a (generative) chatbot?
- Signpost is evaluating the safest and best ways to deploy more advanced AI tools in the near and long-term future. We are not currently building AI-powered bots for each country team. We are doing research and development into what is possible and what a future architecture for generative bots could look like. We are focused on exploring how AI can support the work that is done by Signpost personnel to save time in processes, boost efficiency, and improve our work.
Question: Can I use ChatGPT to help me write content? Is it accurate? How much should I check?
- In keeping with journalism standards in our editorial practices, AI cannot replace the work of editorial personnel. It is a tool like any other, and guidance from the Signpost Humanitarian Content and Social Media Advisor should be sought if teams are considering its use.
- For content creation and management, we will follow the Signpost editorial guidelines here.
- Signpost content must be accurate and up to date. Some AI-generated content is accurate, but other AI-generated content is not. It is often difficult to tell when information is inaccurate without thorough checks, because AI mimics the patterns of trustworthy information.
- Signpost provides contextual information specific to certain geographic locations. AI-generated content may lack contextualization and needs-based information.
- AI assembles content for you from existing information scraped from the internet (before September 2021 for ChatGPT), meaning that there is a risk of duplication and copyright issues with AI-generated content, and that newer topics may not be in its training data.
- AI can help suggest article titles, keywords, and content headlines. If teams are considering using AI to support SEO, please refer to the Signpost SEO guidance here.
Question: What is Zendesk doing with AI and will we be able to use it?
- Zendesk AI: Zendesk has released its own AI tools. Within the coming months you are most likely to hear about possible implementations of Zendesk AI (language detection, sentiment detection, intent detection, macro suggestion, macro creation, ticket summarization) and early designs of chatbot frameworks. Zendesk is providing us with a demo shortly, and we are reviewing possible changes to our account to enable the Zendesk AI functionality, based on a review of the tools' safety, impact on data protection, effectiveness, and risk. The suite of Zendesk AI tools managed and developed by Zendesk includes the following features:
- Intelligent Triage and Context Panel: The AI will help interpret what the ticket is about, the sentiment / mood of the client, and other details like the language.
- If this feature is safe and effective, it could streamline workflows and improve efficiency of moderation work, including auto-update parts of the ticket form.
- Advanced Bots: This AI tool can send automatic responses that are "trained" to respond however the team wants. The AI would understand topic and intent, escalate to agents, and offer suggested articles (or other responses) based on its ability to understand the question and the training the bot has received. It could function 24/7.
- Use cases would either require extensive testing and training, or the bot would offer a prescriptive (defined by the program team) reply.
- Macro Suggestions: A tool in the context panel that suggests the creation of new macros, and the selection of existing macros based on the intent of the user. This identifies knowledge gaps in macros and reduces the time spent on analysis.
- This tool will support faster workflow, admin processes, and help teams fully leverage content within a central knowledge base.
- Customized auto replies: Newly available if Intelligent Triage is enabled. It can look at a message's intent and send an auto-reply, configured in triggers based on your set office hours, stating either that 1. we are in the office and will get to your (custom question) shortly, or 2. we are out of the office and will get to your (custom question) shortly. If enabled (and tested), it may be able to answer the question directly from our help center content and macros.
- Article Recommendations: Recently enhanced by OpenAI, Zendesk's article recommendations feature in chatbots recommends articles from the help center to anybody using the chatbot.
- OpenAI: Based on a global partnership, Zendesk has integrated a number of OpenAI apps into its platform (not available in all languages) that would support the work of an agent / Signpost moderator.
- Expand: Adds additional language to the content of the agent’s comment.
- This could help moderators/community liaisons offer more complete responses to complex topics. (Language limitations)
- Tone Change: Automatically makes the tone of a message more or less formal without changing its meaning.
- This could help moderators/community liaisons ensure that the tone of messages is well considered for each client’s situation. (Language limitations)
- Ticket Summarization: OpenAI reads your ticket and provides a summary, customer sentiment, and key information (which can draw from previous contact tickets).
- This could help make sense of larger sets of tickets and interactions to better understand the context of clients' needs at a broader level.
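The office-hours logic behind the customized auto replies described above can be sketched in a few lines (the hours and wording here are illustrative, not Zendesk's actual trigger configuration):

```python
from datetime import time

# Illustrative office hours; a real deployment would use the team's
# configured hours and time zone in the Zendesk trigger settings.
OFFICE_OPEN, OFFICE_CLOSE = time(9, 0), time(17, 0)

def auto_reply(now: time, topic: str) -> str:
    """Pick the in-office or out-of-office auto-reply for a detected topic."""
    if OFFICE_OPEN <= now < OFFICE_CLOSE:
        return f"We are in the office and will get to your {topic} question shortly."
    return f"We are out of the office and will get to your {topic} question shortly."
```

The `topic` parameter stands in for the intent that Intelligent Triage would detect from the client's message.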
Question: How is Signpost going to use AI in the future?
- Here are some of the possibilities we are looking into testing, noting that the outcomes of these tests are uncertain and that the goal is to identify efficiencies for clients and teams:
- Chatbots
- Using AI to create service mapping entries from service mapping and open web data
- A bot companion to help find information on our website
- Reformatting content to make social media posts with more “viral” keywords
- Automatic meta tagging, keywording, and search improvements
- Generated visual graphics and multimedia content to create a Signpost content library
- Creating synthesized news content from selected trusted sources
- We expect that our use of AI will evolve over time and as the technology matures. Signpost is reviewing options to use AI for the features mentioned above as they are released and researching other technologies along the way. Quality of programming, safety, and security remain at the core of all research.
- We will be evaluating potential applicability of AI against the best practices that inform our programming approaches as well as rigorous digital protection and ethical standards.
- We are very open to ideas from implementing teams and will likely be asking for teams to help us test and pilot potential applications of AI for Signpost!
Question: When it comes to AI, what is Signpost excited about? What is it worried about?
We are generally excited about the following:
- Possibility to classify data faster than ever before.
- Possibility to simplify work of moderators/community liaisons to improve efficiencies, quality, and work experience of personnel.
- Possibility of better program quality overall.
- Possibility to improve site performance and reach.
We are generally worried about the following:
- Moving too fast and damaging program teams or clients / client communities.
- Moving too slow and being left behind by the rapidly evolving tech landscape.
- Proliferation of misinformation via bots, and the possibility of encountering more frequent "hallucinations" (a term for AI-generated information that appears true but is false).
- A general sense within the aid sector that humans are no longer needed now that we have smart bots.