Information Trustworthiness - AAAI 2013 Tutorial

Instructors:

Jeffrey Pasternack, Dan Roth and V.G.Vinod Vydiswaran

Date and Time:

Monday, July 15th, 2013 9:00 am - 1:00 pm

Venue:

Room TBA
Hyatt Regency Bellevue, Bellevue, WA

Abstract:

Decision makers and citizens alike are heavily influenced by the information they obtain from online resources -- news portals, online encyclopedias, blogs and forums, product websites and reviews, and so on. However, the lack of control over what gets published online often leads to the dissemination of unreliable and misleading information. How, then, can one verify whether a claim is true, or determine which sources are trustworthy? In this tutorial, the instructors systematically survey the approaches that address this problem. The tutorial is aimed at AI researchers interested in working on the trustworthiness of information. Its goal is to present the audience with a comprehensive survey of relevant research on the topic and to outline directions of future work that may interest the AI community.

No specific background knowledge is assumed of the audience. The intended duration of the tutorial is four hours.

Tutorial Outline:

  1. Introduction and motivation [15 min]
    • Real-life examples showing the need to verify claims and to study information trustworthiness across different domains
  2. Trustworthiness approaches based solely on source features [45 min]
    • Graph-based approaches
    • Source reliability using feature-based classification approaches
    • Text-centric approaches that check claims using community knowledge
    • Using reputation of sources
  3. Fact-finders: Trustworthiness approaches that model both sources and claims [30 min] (a minimal code sketch of one such update follows the outline)
    • Voting, "Sums", "Investment", and variants
    • TruthFinder
    • Probabilistic models
  4. Extensions to the basic models: adding structure [60 min]
    • Incorporating prior knowledge into models: generalized fact-finders
    • Incorporating context: Evidence-based Trust Framework
    • Handling source dependence
    • Latent credibility analysis
  5. Presenting trustworthy information [30 min]
    • External factors that influence human perception of trust
    • Presenting credible information on controversial issues
  6. Conclusion and future research directions [30 min]
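
To give a concrete flavor of the fact-finder family in Part 3, the sketch below implements the "Sums" update (Hubs and Authorities run on the bipartite source-claim graph): a claim's belief is the sum of its sources' trust, a source's trust is the sum of its claims' beliefs, and both are renormalized each round. This is a minimal illustration for intuition, not the tutorial's reference code; the function name and toy data are invented for this example.

    from collections import defaultdict

    def sums_fact_finder(assertions, iterations=20):
        # assertions: iterable of (source, claim) pairs
        claims_of = defaultdict(set)   # source -> claims it asserts
        sources_of = defaultdict(set)  # claim  -> sources asserting it
        for source, claim in assertions:
            claims_of[source].add(claim)
            sources_of[claim].add(source)

        trust = {source: 1.0 for source in claims_of}  # uniform initial trust
        belief = {}
        for _ in range(iterations):
            # B(c) = sum of T(s) over the sources asserting c
            belief = {c: sum(trust[s] for s in srcs)
                      for c, srcs in sources_of.items()}
            # T(s) = sum of B(c) over the claims s asserts
            trust = {s: sum(belief[c] for c in cls)
                     for s, cls in claims_of.items()}
            # Normalize by the maximum so scores stay bounded across rounds
            top = max(trust.values())
            trust = {s: t / top for s, t in trust.items()}
            top = max(belief.values())
            belief = {c: b / top for c, b in belief.items()}
        return trust, belief

    # Toy example (invented): three sources assert two rival claims.
    data = [("site_a", "claim_x"), ("site_b", "claim_x"),
            ("site_c", "claim_y")]
    trust, belief = sums_fact_finder(data)
    print(sorted(belief.items(), key=lambda kv: -kv[1]))

The same skeleton accommodates several of the other fact-finders by swapping the two update rules; TruthFinder, for instance, combines source trust scores probabilistically (treating them roughly as probabilities that each source is correct) rather than summing them.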

Instructors' bios:

Jeff Pasternack is a research scientist at Facebook. His dissertation work, advised by Dan Roth, focused on computing the trustworthiness of information via two families of models: non-probabilistic fact-finders, which iteratively evaluate the believability of claims given the trustworthiness of their sources (and vice versa), and, more recently, generative Latent Credibility Analysis models, which provide a principled and highly effective method of assessing trust by describing how a source "decides" to assert a claim. Making efficient use of prior knowledge has been of substantial interest in both approaches. His earlier work includes metrics for measuring the collective trustworthiness of sets of claims (such as news articles), information extraction, transliteration, and constrained learning.

Dan Roth is a Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign and the Beckman Institute of Advanced Science and Technology, and a University of Illinois Scholar. He is a Fellow of AAAI, the ACL, and the ACM. Roth has published broadly in machine learning, natural language processing, and knowledge representation and reasoning, and has received several paper, teaching, and research awards. He has developed several machine-learning-based natural language processing systems that are widely used in the computational linguistics community and in industry, and has presented invited talks and tutorials at several major conferences. He is the Associate Editor-in-Chief of the Journal of AI Research (JAIR) and will become its Editor-in-Chief in 2015.

V.G.Vinod Vydiswaran is a post-doctoral research associate at the Information Trust Institute at the University of Illinois at Urbana-Champaign. His dissertation research, advised by Prof. Dan Roth and Prof. ChengXiang Zhai, focused on modeling and predicting the trustworthiness of online textual information; in it, he addressed the algorithmic, data-driven, and user-centric challenges of computing trustworthiness. His earlier work includes developing a textual entailment system and applying entailment techniques to relation extraction and information retrieval. His research interests include text informatics, natural language processing, machine learning, and information extraction.