Sesame Street Image Corpus
Sophia Vinci-Booher, assistant professor of educational neuroscience, will create an archive of images from more than 4,500 Sesame Street episodes. The television program has been a staple in the lives of children and their families since 1969, when it began its mission to support early education in under-served communities.
The archive will contain tens of thousands of Sesame Street images that will be annotated to identify the location of items in each scene, including common objects and educational content. With support from the Digital Lab and Heard Libraries staff, the Sesame Street Image Corpus will be stored in a digital archive that will allow efficient image selection across multiple academic disciplines, such as the brain sciences, social sciences, library sciences, education, history, visual arts and filmography.
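To make the idea of scene annotation concrete, here is a minimal sketch of what a single annotation record in such a corpus might look like, with each labeled item located by a bounding box in the frame. The field names and values are illustrative assumptions, not the project's actual schema.

```python
import json

# One hypothetical annotation record: an episode frame plus the items
# labeled within it, each located by a [left, top, right, bottom] box.
record = {
    "episode": 1234,
    "frame": "scene_042.png",
    "annotations": [
        {"label": "letter B", "category": "educational", "bbox": [40, 60, 120, 150]},
        {"label": "rubber duck", "category": "object", "bbox": [300, 210, 380, 290]},
    ],
}

# Serializing to JSON keeps records portable across disciplines and tools.
serialized = json.dumps(record)
```

Storing annotations in a plain, machine-readable format like this is one common way to support the kind of efficient cross-disciplinary image selection the project describes.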
Image Processing Pipeline Interface
Thomas Scherr, research associate professor of chemistry, will develop a user-friendly web interface for creating image processing pipelines to allow for the analysis of biomedical images.
Biomedical research increasingly relies on sophisticated image analysis to unravel complex biological processes and enhance diagnostic accuracy. However, the complexity of these tools and the extensive technical knowledge required to operate them prevent a large segment of the biomedical community from using advanced image analysis techniques. Scherr's team will develop an innovative tool that allows users around the world, even those without programming experience, to process large batches of images, automatically determine their results, and record outcomes for further analysis. The team is particularly interested in collaborating with the Digital Lab for its guidance and expertise in usability and user-friendly interface design.
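The core idea of a batch image-processing pipeline can be sketched in a few lines. This is a simplified illustration, not the project's actual tool: images are represented here as 2D grayscale pixel arrays, and the step names are invented for the example.

```python
def threshold(image, cutoff=128):
    """Binarize: pixels at or above the cutoff become 1, others 0."""
    return [[1 if px >= cutoff else 0 for px in row] for row in image]

def count_positive(image):
    """Count pixels flagged as 1, a rough stand-in for 'signal detected'."""
    return sum(sum(row) for row in image)

def run_pipeline(images, steps):
    """Apply each processing step in order to every image in the batch,
    recording the final result per image for later analysis."""
    results = {}
    for name, image in images.items():
        data = image
        for step in steps[:-1]:
            data = step(data)
        results[name] = steps[-1](data)
    return results

# A tiny batch of 2x2 "images" with pixel values in 0-255.
batch = {
    "scan_01": [[200, 90], [130, 40]],
    "scan_02": [[10, 20], [30, 250]],
}
outcomes = run_pipeline(batch, [threshold, count_positive])
# outcomes == {"scan_01": 2, "scan_02": 1}
```

A web interface like the one described would let users assemble the `steps` list by pointing and clicking rather than writing code, which is what makes the approach accessible to researchers without programming experience.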
Immersive Multisensory Cave
Marcus Watson, a software developer in the Multisensory Research Lab directed by Mark Wallace, the Louise B. McGavock Professor of Psychology, will study multisensory integration—how the brain combines and integrates acoustic, visual and touch signals.
The Wallace lab is constructing a “cave” environment to study multisensory integration. This includes three large screen walls and a floor that receive video from high-resolution projectors, a 25-speaker array, full motion tracking, robotic force-feedback devices and other technologies.
Virtual Reality Stereoscopic Images – Victorian VR
Ole Molvig, assistant professor of history and assistant professor of cinema and media arts, will use machine learning and virtual reality to make 19th- and early 20th-century stereo photography more accessible to 21st-century viewers.
Stereoscopy is a technique, popular during the Victorian era, that creates the illusion of three-dimensional depth from a pair of two-dimensional images. To date, hundreds of thousands of stereo pair images have been scanned and are hosted and downloadable online, including from major archives like the Library of Congress and the New York Public Library. However, viewing them as intended, with stereoscopic depth, requires significant effort. This project aims to make these images viewable inside modern virtual reality headsets, which will vastly increase access to them.
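One common way to show a stereo pair in a VR headset is to combine the left- and right-eye scans into a single side-by-side (SBS) frame, a layout most VR media players accept. The sketch below assumes both halves are same-sized 2D pixel arrays and is illustrative only, not the project's actual processing code.

```python
def to_side_by_side(left, right):
    """Join a left-eye and right-eye image into one side-by-side frame
    by concatenating each row of pixels."""
    if len(left) != len(right):
        raise ValueError("stereo halves must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left, right)]

# Tiny 2x2 stand-ins for the two halves of a scanned stereo card.
left_eye  = [[1, 2], [3, 4]]
right_eye = [[5, 6], [7, 8]]
sbs = to_side_by_side(left_eye, right_eye)
# sbs == [[1, 2, 5, 6], [3, 4, 7, 8]]
```

In practice the harder steps are the ones machine learning can help with, such as locating and aligning the two halves on a scanned stereo card, but the final display format is often as simple as this.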
Have a project idea you are interested in cultivating? Please don’t hesitate to reach out, and we will be more than happy to work with you to bring it to life: digital.lab@vanderbilt.edu