Starbutter AI’s Ethics and Principles
Made with 💌 in California 🐻
Artificial intelligence (AI) is software that learns and adapts from its users. Starbutter applies AI techniques from machine learning and natural language processing to make our virtual assistants helpful. In fact, "conversational agents" (known informally as chatbots or virtual assistants) formed one of the earliest fields of AI, with early stars like ELIZA, Cleverbot, and Mitsuku, and now Siri, Cortana, and Assistant.
Conversational agents are particularly important as human labor becomes more expensive due to demographic trends such as low birth rates and rising educational attainment. Few humans want to do rote service tasks, and these are exactly the tasks virtual assistants are well suited to take over. Some estimates suggest that up to 47% of today's US employment will be replaced in the next 10-15 years. We suspect that conversational agents will help take over many of the simpler cognitive tasks. As an example, we estimate there are 470,000 mortgage brokers in the US, and that 45-65% of their time is spent answering the same rote questions. As Starbutter's virtual assistants take over that work, humans can focus on the tougher questions of empathy, complex loan originations, and closing sales, while leaving the rote work to the virtual assistant.
Below are the concrete standards that govern our research and product development and shape our business decisions. They are dynamic and evolving. We will approach our work with humility, respect, and trust, knowing that a single virtual assistant could interact with and serve millions or even billions of humans. So it's important that our conversational agents be "guardian angels" that act in users' best interests, like a nonpartisan, professional caretaker.
We will assess our AI conversational agents using the following objectives. We believe that AI in our domain should:
1. Be socially beneficial and embed human values.
Conversational agents have huge potential to educate students, take care of the elderly, and help consumers in a range of transactions, from grocery shopping to online retail and financial product shopping. AI enhances our ability to understand the meaning of content at scale and serve it in meaningful, customized ways to consumers. We will honor and do our best to embody the cultural, social, and legal norms of the countries where we operate. Broadly, we want to make assistants that do not take away human jobs, but that augment humans and allow them to pursue other jobs that let them grow and evolve personally, mentally, and professionally. For example, we see call center jobs evolving into AI trainer jobs.
2. Avoid creating or reinforcing unfair bias.
Conversational agents and their datasets can reflect, reinforce, or reduce unfair biases. Distinguishing fair from unfair biases is not always simple, and values differ across cultures and societies. We seek to avoid unjust impacts on people, particularly those related to characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. We also strive to make our chatbots accessible to the majority of people rather than catering only to smaller demographics, so that no group of users receives unfair treatment or privilege.
3. Be built and tested for safety.
We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. We test chatbots in constrained environments and monitor their operation before deployment.
4. Be transparent and accountable to people.
We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our conversational agents will be subject to appropriate human direction and control. Our chatbots will identify themselves as non-human entities and have avatars and personalities that reflect that they are inspired by humans, but are not human.
5. Serve two parties in a fair way as a marketplace.
A common question is: "Does this chatbot serve me, or the service provider?" In other words, "Does this chatbot have MY interests at heart, or someone else's?" Our chatbots will do their best to fairly serve two parties: users and advertisers. As a marketplace, we stand between the two and want both to get value and prosper. We will always customize responses and matches responsibly so as to represent the best interests of users first and advertisers second (though we take both sides into account).
Our agents may serve ads, which help pay for our free service. Any agent may therefore use data you provided, either directly or through the API, to optimize these ads. We do our best not to serve ads unless we believe they have a clear purpose that benefits the user when served. Finally, we strive to create ethical partnerships with existing financial institutions and digital marketing platforms. We want to make sure that the people and corporations we partner with share our core values and have a clear-cut mission to improve the way the world lives, works, and plays.
6. Bring expert help with empathy while not misleading or deceiving.
Our agents do their best to bring a personalized, expert service or recommendation to users. They will try to do this with empathy, and without misleading or deceiving their human users. Often this means we must carefully construct a choice architecture that promotes what's best for the two sides we serve in a marketplace. We do our best to make our agents accurate and clear, and when they're not, we promise to fix them within a reasonable time period. One quality of great experts that we strive to emulate: having humility and owning up to our mistakes and limitations.
7. Be transparent about what we keep private and what we don’t.
8. Do not injure a human being and respect conversational norms.
We will train our agents to the best of our ability not to injure humans, with the caveat that many bots are complex systems whose behavior cannot be fully predicted. Hence testing and safety periods are extremely important. Our agents will neither abuse humans nor tolerate abuse in return. If a human abuses our agents, they will politely and firmly push back or disengage. Requests from users to end communication will be respected, and we will have a protocol for ending a chat to prevent any agent from harassing or spamming a user.