Human-AI Tools for Aligning to Machine Representations

Currently, there is no effective method for translating abstract human concepts into a representation that a machine model can understand and operate on reliably. We propose a system that facilitates a human-machine feedback loop in which designers can see how their concepts are represented by the machine and use that information to refine their concepts for better alignment. By surfacing confounding context features and suggesting additional ones, the system helps designers construct more precise definitions. By simulating how their concepts are represented on different subsets of data, it also helps designers better understand both their own conception of the situation and the “vocabulary” of their available data. Through user testing, we found that simulation and prompting do help designers better align to the available machine representation, while giving them a clearer awareness of the tradeoff between precision and recall that results from refinement.
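The core of this loop can be illustrated with a small, self-contained sketch. Everything below is a hypothetical stand-in, not the system's actual implementation: the concept is a toy keyword set, `matches` plays the role of the machine representation, and `precision_recall` and `confounding_features` approximate the simulation and confounder-surfacing steps described above.

```python
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: bool  # the designer's own judgment of whether the concept applies

def matches(definition: set[str], text: str) -> bool:
    """Stand-in 'machine representation': the concept fires if any feature
    in the definition appears as a word in the text."""
    return bool(definition & set(text.lower().split()))

def precision_recall(definition: set[str], data: list[Example]) -> tuple[float, float]:
    """Simulate the current definition on a subset of data."""
    tp = sum(ex.label and matches(definition, ex.text) for ex in data)
    fp = sum(not ex.label and matches(definition, ex.text) for ex in data)
    fn = sum(ex.label and not matches(definition, ex.text) for ex in data)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def confounding_features(definition: set[str], data: list[Example]) -> list[str]:
    """Surface definition features that fire mostly on negative examples --
    candidates for the designer to refine away."""
    confounders = []
    for feature in definition:
        hits = [ex for ex in data if feature in ex.text.lower().split()]
        if hits and sum(ex.label for ex in hits) / len(hits) < 0.5:
            confounders.append(feature)
    return confounders

# Hypothetical labeled data: the concept is "angry customer messages".
data = [
    Example("angry customer demands refund", True),
    Example("customer politely requests refund", False),
    Example("angry rant about shipping delays", True),
    Example("neutral question about refund policy", False),
    Example("frustrated customer demands refund", True),
    Example("spam email mentioning refund offer", False),
]

definition = {"angry", "refund"}                 # broad initial definition
print(precision_recall(definition, data))        # -> (0.5, 1.0): noisy but complete
print(confounding_features(definition, data))    # -> ['refund']: fires mostly on negatives

definition -= {"refund"}                         # refine away the confounder
print(precision_recall(definition, data))        # -> (1.0, 0.666...): precision up, recall down
```

In this toy run, removing the surfaced confounder raises precision from 0.5 to 1.0 while recall falls from 1.0 to about 0.67, which is exactly the precision/recall tradeoff the refinement loop is meant to make visible to designers.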

Team

Master's and Undergraduate Students

  • 🎓 Harita Duggirala
  • 🎓 Jiho Kwak