Human-in-the-loop is a model of interaction in which a machine process and one or more humans interact iteratively. In this paradigm, users can strongly influence the outcome of the process by providing feedback to the system; they also have the opportunity to gain different perspectives on the underlying domain and to understand the step-by-step machine process that leads to a given outcome. Among the major concerns in current Artificial Intelligence research are the ability to explain and understand results, and the need to avoid bias in the underlying data that might lead to unfair or unethical conclusions. Computers are typically fast and accurate at processing vast amounts of data; people, by contrast, are creative and bring their own perspectives and interpretive power. Bringing humans and machines together creates a natural symbiosis for the accurate interpretation of data.
Crowdsourcing has become a successful method for obtaining the human computation needed to augment algorithms and perform high-quality data management. Humans, however, have various cognitive biases that influence the way they interpret statements, make decisions, and remember information. If we use crowdsourcing to generate ground truth, it is important to identify the biases present among crowdsourcing contributors and to analyze the effects those biases may produce. At the same time, access to a potentially large number of people gives us the opportunity to address the biases in existing data and systems.
The goal of this workshop is to bring together researchers and practitioners from various areas of AI (e.g., Machine Learning, NLP, Computational Advertising) to explore new directions for the human-in-the-loop paradigm. We aim both to analyze existing biases in crowdsourcing and to explore methods for managing bias via crowdsourcing. We would like to discuss different types of bias, measures and methods to track bias, and methodologies to prevent and mitigate it. The workshop will provide a framework for discussion among scholars, practitioners, and other interested parties, including crowd workers, requesters, and crowdsourcing platform managers.
Authors can submit four types of papers: