Presentation Slides go here

September 18

Overview
Slides
Homework: By Wednesday, September 23, (1) sign up on Doodle for your presentation, and (2) email me your Gmail address.

September 25

Human-Robot Interaction
Guest Speaker: Dana Kulic
Human-Robot Interaction: Autonomy, Learning and Effective Collaboration
This lecture will provide an overview of the field of human-robot interaction. We will briefly review the different types of interaction that are studied, including physical, cognitive, and social interaction, and the different application domains, including industrial automation, service robotics, entertainment, and education. The second part of the talk will focus on the appropriate level of robot autonomy and its impact on human-robot interaction. While active research in robotics has led to significant advances in robot abilities, robots are still not capable of fully autonomous and effective task execution, especially in difficult and uncontrolled environments. This creates a need for human teaching and supervision of robot actions. Recent work on approaches to HRI design during shared-autonomy interaction will be discussed.

Required Readings:
** The list of readings is meant to help you generate project ideas that are well motivated. Your project pitch should reflect your understanding of what has been done in some of these papers. You may pick and choose the paper(s) that are most interesting to you to focus on.
TurtleBot, Nao
Akgun et al. Trajectories and Keyframes for Kinesthetic Teaching: A Human-Robot Interaction Perspective.
Huang et al., Modeling and Controlling Friendliness for An Interactive Museum Robot.
Yanco et al., Analysis of Human-robot Interaction at the DARPA Robotics Challenge Trials.
Chung et al., Accelerating Imitation Learning through Crowdsourcing.

Due Thursday evening by midnight: project pitches (1 slide, PDF format).

October 2

Brain-Computer Interfaces
Guest Speaker: Lennart Nacke
Hack your brains! What you can and cannot do with brain-computer interfaces
Your brain is a fascinating and powerful organ at the centre of your nervous system. Being able to capture the electrical impulses generated by your brain activity and turn them into actionable data is no longer science fiction. Brain-computer interfaces, or BCIs, have been popular for years as a means of providing paralyzed persons with new interaction possibilities, but they can also work as a simple control mechanism, as a measurement of user state, or for providing neurofeedback to the healthy population. We will discuss the challenges that come with trying to use the slower and less accurate input from BCIs in immersive applications such as video games. You will take away a basic knowledge of the areas in which BCIs have been used successfully, as well as their current applications in video games.

Readings:
NeuroSky, Muse
Nijholt et al., Turning shortcomings into challenges: Brain-computer interfaces for games.
Brain-Computer Interfacing and Games
Human-Computer Interface Issues in Controlling Virtual Reality With Brain-Computer Interface
Bonnet et al., Two Brains, One Game: Design and Evaluation of a Multi-User BCI Video Game Based on Motor Imagery.
Plass-Oude Bos et al., User Experience Evaluation in BCI: Mind the Gap!

Due Thursday evening by midnight: project pitches (1 slide, PDF format).
Changes to Project Pitches: Read This

October 9

Volunteer-Based Crowdsourcing
Guest Speaker: Peter Johnson
A critical perspective on crowdsourcing
This lecture will provide a short overview of volunteered geographic information (VGI), a type of user-generated content with an explicit geographic component. The nature of the term "volunteer" and the different ways that volunteering is realized will be critiqued with reference to OpenStreetMap.org, crisis mapping/humanitarian applications, and government/civic engagement applications. This presentation will challenge you to reconsider who is involved in contributing information, the social implications, and the ultimate sustainability of crowdsourcing and citizen science efforts that rely on volunteer labour.

Readings:
Tweeting up a storm: the promise and peril of crisis mapping
Starbird and Palen. (How) Will the Revolution be Retweeted? Information Diffusion and the 2011 Egyptian Uprising
Dynamic Changes in Motivation in Collaborative Citizen-Science Projects
Crowdsourcing on the Spot: Altruistic Use of Public Displays, Feasibility, Performance, and Behaviours
Law and von Ahn. Input-agreement: A New Mechanism for Collecting Data Using Human Computation Games
Brady et al., Investigating the Appropriateness of Social Network Question Asking as a Resource for Blind Users
Heimerl et al. Communitysourcing: Engaging Local Crowds to Perform Expert Work Via Physical Kiosks
Citizen Science (This is not a paper, but gives a nice overview of different types of citizen science projects)
Ipeirotis and Gabrilovich. Quizz: Targeted Crowdsourcing with a Billion (Potential) Users.

Due Thursday evening by midnight: project pitches (1 slide, PDF format).

October 16

Paid Crowdsourcing
Guest Speaker: Kristy Milland (SpamGirl)
Amazon Mechanical Turk: The State of the Turkdom
This talk will start with a quick primer on Amazon Mechanical Turk and crowd work in general, then lead you through recent changes that have shaken up the Turker community. It will stress why we have to start thinking critically about crowd work platforms and how they are being used.

Readings:
TurKit, Boto, Orchestra
Martin et al., Being a Turker.
Gupta et al., Turk-Life in India
Irani and Silberman. Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk.
Callison-Burch. Crowd-Workers: Aggregating Information Across Turkers to Help Them Find Higher Paying Work
Hanrahan et al. TurkBench: Rendering the Market for Turkers
Retelny et al. Expert Crowdsourcing with Flash Teams
Shepherding the Crowd Yields Better Work

Due Thursday evening by midnight: project pitches (1 slide, PDF format).

October 23

Interactive Machine Learning
Guest Speaker: Burr Settles (Duolingo)
Title: Inter-Active Learning with Queries on Instances and Features
In this talk, I will discuss a few projects aimed at "closing the loop" for interactive natural language annotation and learning. In particular, I describe two systems that combine active and semi-supervised learning by asking humans to label both instance queries (e.g., passages of text) and feature queries (e.g., advice about words or capitalization patterns, and the class labels they imply). Empirical results from real user studies show that these systems are better than state-of-the-art "passive" learning and even instance-only "active" learning, in terms of accuracy given a fixed budget of annotation time. The results are quite replicable and also provide insight into human annotator behavior, suggesting how human factors can and should be taken into account for interactive machine learning.
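The instance-query half of the approach described above is classic pool-based active learning: the model is retrained after each human label and the next query is the unlabeled example it is least certain about. As a rough illustration only (not Settles' actual system — the classifier, the margin-based uncertainty score, and the toy 1-D data below are all simplifying assumptions), the loop looks like:

```python
def train_centroids(labeled):
    """Fit a trivial nearest-centroid classifier from (x, label) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def uncertainty(x, centroids):
    """Margin-based uncertainty: a small gap between the two nearest
    class centroids means the model is unsure about x."""
    dists = sorted(abs(x - c) for c in centroids.values())
    return -(dists[1] - dists[0])  # higher = more uncertain

def active_learning(pool, oracle, seed_labels, budget):
    """Pool-based active learning with uncertainty sampling: repeatedly
    ask the human oracle to label the most uncertain unlabeled point."""
    labeled = list(seed_labels)
    seen = {x for x, _ in labeled}
    unlabeled = [x for x in pool if x not in seen]
    for _ in range(budget):
        centroids = train_centroids(labeled)
        x = max(unlabeled, key=lambda x: uncertainty(x, centroids))
        labeled.append((x, oracle(x)))  # the "instance query" to a human
        unlabeled.remove(x)
    return train_centroids(labeled)

# Toy run: two 1-D classes; the oracle labels points by sign.
pool = [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5]
oracle = lambda x: "neg" if x < 0 else "pos"
model = active_learning(pool, oracle, [(-5, "neg"), (5, "pos")], budget=3)
```

Because queries concentrate near the current decision boundary, the labeling budget is spent where it most improves accuracy; the feature queries in the talk extend the same idea from labeling examples to labeling informative features.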

Readings:
Wekinator
Fails and Olsen. Interactive Machine Learning
B. Settles. Closing the Loop: Fast, Interactive Semi-Supervised Annotation With Queries on Features and Instances
Amershi et al., ReGroup: Interactive Machine Learning for On-Demand Group Creation in Social Networks.
Vijayanarasimhan and Grauman. Large-Scale Live Active Learning: Training Object Detectors with Crawled Data and Crowds.
Ambati et al. Active Learning and Crowdsourcing for Machine Translation
Power to the People: The Role of Humans in Interactive Machine Learning
Fiebrink et al. Human Model Evaluation in Interactive Supervised Learning
Attenberg et al. Beat the Machine: Challenging Workers to Find the Unknown Unknowns

Due Thursday evening by midnight: project pitches (1 slide, PDF format).

October 30

Introduction to HCI + Rapid Prototyping Session

Due Thursday evening by midnight: project proposal, 1-2 pages, in SIGCHI format. Please put your project proposal into Google Drive in PDF format. The proposal should include (a) a problem statement, (b) a system description, and (c) a plan for evaluating the system's performance. Your description should address the following questions: What will you consider when designing the interfaces to elicit human input/feedback? Can you think of multiple interfaces to elicit the same input/feedback? Which interface would you expect to work better, and why? How would you quantify the performance of different interfaces? Which types of research contributions is your project expected to produce?

November 6

Presentations
Presenters: Luyu Wang, Hengzhi Zhang, Tim Tse, William Callaghan, Alex Williams

November 13

Presentations
Presenters: Pavel Valov, Haotian Zhang, Xueca Cai, Kristen Morgan, Sajin Sasy

November 20

Presentations
Presenters: Anson Ho, Laton Vermette, Irish Medina, Dan Wang, Hird Roshandel

Due: 6-page project paper draft (introduction, related work, system description, experimental design)

November 27

Presentations
Presenters: Wanqi Li, Maithri Hegde, Edmund Liu, Junnan Chen

December 4

Project Poster/Demo Session

Due Friday, December 11: 10-page project paper.