Lee Stearns
Senior Professional Staff, Johns Hopkins Applied Physics Laboratory

ABOUT ME

Photo of Lee Stearns

I joined the Johns Hopkins Applied Physics Laboratory as a member of the senior professional staff in September 2018. My work spans machine learning and AI, video analytics, wearable sensors, and human-machine teaming.

I completed my PhD in computer science at the University of Maryland, College Park in August 2018. My dissertation was titled "HandSight: A Touch-Based Wearable System to Increase Information Accessibility for People with Visual Impairments". My co-advisors were Professors Jon Froehlich and Rama Chellappa.

I also provide consulting services for software and web development when my schedule permits. I have designed and built utilities to process and visualize medical image data, as well as mobile and web applications that help professional and amateur shooters calculate ballistics for a client's custom rifle scope.

Download My C.V.

RESEARCH

The goal of my research is to build a wearable device that helps people with visual impairments access information through touch. Examples include reading and exploring printed text, identifying colors and textures, and gesturing on the surface of the body to control a mobile device. The key component is a small finger-mounted camera, complemented by additional sensors and haptic feedback on the fingers or wrist. My research spans multiple disciplines: applying computer vision and machine learning techniques to process sensor readings, building wearable prototypes and custom circuitry, and designing and testing accessible interfaces through user studies. My complete dissertation is available here.


Reading printed text with a finger-mounted camera

Reading and Exploring Printed Text

HandSight is a wearable system that assists visually impaired users in reading printed text and exploring the spatial layout of a page. It includes a small finger-mounted camera along with speech feedback and audio and haptic cues to help users explore or read physical documents. The reading experience was inspired by braille: users can explore where text and images are positioned on the page through touch and control the speed of reading by moving their fingers across the page.
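
As a rough illustration of this interaction model (not the actual HandSight pipeline), the sketch below maps a finger's horizontal position on a text line to the word that should be spoken, so that reading pace follows finger movement. The word bounding boxes and finger trajectory are hypothetical placeholders for what OCR and finger tracking would provide.

    # Illustrative sketch only: map a finger's x-position on a text line to the
    # word that should be spoken, so reading speed follows finger speed. Word
    # bounding boxes would come from OCR on the finger-camera video; here they
    # are hard-coded hypothetical values, and "speaking" is just a print call.
    WORDS = [  # (word, x_start, x_end) in hypothetical page coordinates
        ("HandSight", 0, 120),
        ("reads", 130, 190),
        ("printed", 200, 290),
        ("text", 300, 350),
    ]

    def word_under_finger(x):
        """Return the word whose horizontal extent contains the finger position, if any."""
        for word, x0, x1 in WORDS:
            if x0 <= x <= x1:
                return word
        return None

    last_spoken = None
    for finger_x in [10, 60, 150, 220, 225, 320]:  # simulated finger trajectory
        word = word_under_finger(finger_x)
        if word and word != last_spoken:  # announce each word once as the finger reaches it
            print("speak:", word)
            last_spoken = word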

Publications: ACVR 2014, TACCESS Nov 2016


Controlling Devices with On-Body Input

TouchCam uses a finger-mounted camera and several other finger- and wrist-worn sensors to allow users to control mobile devices through touch-based gestures on the surface of their own body. Because the sensors are positioned on the gesturing hand rather than on the upper body or the target interaction surface, TouchCam scales easily and supports input at a wide variety of locations within the user's reach, enabling intuitive mappings between body locations and applications. For example, users can tap on their wrist to check the time or swipe on their thigh to interact with a health and fitness app.
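
To give a sense of how such input might be recognized, here is a highly simplified sketch rather than the actual TouchCam pipeline: it trains an off-the-shelf random-forest classifier to distinguish a few body locations from synthetic stand-in feature vectors, then maps the predicted location to an application action. The feature values, location names, and action mappings are all hypothetical.

    # Illustrative sketch only: classify coarse on-body input locations from
    # wearable-sensor feature vectors with a random forest, then map the
    # predicted location to an application action. The features and data are
    # synthetic stand-ins, not the actual TouchCam sensors or dataset.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    LOCATIONS = ["wrist", "palm", "thigh"]  # example on-body input locations
    ACTIONS = {"wrist": "announce time", "palm": "voice assistant", "thigh": "fitness app"}
    N_PER_CLASS, N_FEATURES = 200, 32       # e.g., camera texture + IMU statistics

    rng = np.random.default_rng(0)
    # Fabricate one feature cluster per location to stand in for real sensor data.
    X = np.vstack([rng.normal(loc=i, scale=1.0, size=(N_PER_CLASS, N_FEATURES))
                   for i in range(len(LOCATIONS))])
    y = np.repeat(np.arange(len(LOCATIONS)), N_PER_CLASS)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", round(clf.score(X_test, y_test), 2))

    predicted = LOCATIONS[clf.predict(X_test[:1])[0]]
    print("location", predicted, "->", ACTIONS[predicted])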

Publications: ICPR 2016, ASSETS 2017, IMWUT Dec 2017

The TouchCam prototype

TouchCam Video


Identifying clothing colors and textures

Identifying Colors and Patterns

The finger-mounted camera system can support robust identification of colors and visual patterns through touch-based interactions, mitigating the distance and lighting issues that affect many existing approaches while enabling interactive exploration of surface appearance. Our preliminary investigation demonstrated feasibility and suggested that users could train the system to reliably recognize specific articles of clothing in their own closet with only a small number of training examples. Our follow-up work improved scalability and robustness by building a large dataset of online fabric images and using it to train an end-to-end deep neural network classifier, potentially enabling pattern recognition in unfamiliar settings (e.g., while shopping).
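
The sketch below illustrates that general approach in its simplest form: fine-tuning a pretrained CNN on a folder of fabric images grouped by pattern class. The directory name, class layout, and training settings are assumptions for illustration and do not reproduce the network or dataset from our papers.

    # Illustrative sketch only: fine-tune a pretrained CNN to classify fabric
    # patterns (e.g., striped, plaid, dotted). The "fabric_images/" directory
    # and its class layout are hypothetical, and this does not reproduce the
    # network or dataset described in the paper.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    # Expects fabric_images/<pattern_name>/*.jpg
    dataset = datasets.ImageFolder("fabric_images", transform=transform)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # new classifier head

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")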

Publications: ASSETS 2017, ASSETS 2018


Magnifying with Augmented Reality

Augmented reality can magnify and enhance visual information, making the physical world more accessible for people with vision impairments. Unlike traditional CCTV or handheld magnifiers, a wearable AR system is portable and always available, provides a perceptually large display, and can show magnified output that is co-located with or even overlaid on top of the original content within the wearer’s field of view. Our preliminary investigation explored potential design dimensions, and follow-up work built on these designs while evaluating their effectiveness for low vision users. Participants identified several advantages to the concept of head-worn magnification (e.g., portability, privacy, ready availability), and to our AR designs in particular (e.g., a more natural reading experience and the ability to multitask).
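
As a simple illustration of the underlying idea, the sketch below digitally magnifies the center of a live camera feed using OpenCV. A head-worn AR display would render the enlarged view within the wearer's field of view rather than in a desktop window, and the camera index and zoom factor here are arbitrary assumptions.

    # Illustrative sketch only: digitally magnify the center of a live camera
    # feed. A head-worn AR display would render the enlarged view within the
    # wearer's field of view; here it simply appears in a desktop window.
    import cv2

    ZOOM = 2.0                  # magnification factor (arbitrary)
    cap = cv2.VideoCapture(0)   # default camera

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        ch, cw = int(h / ZOOM), int(w / ZOOM)   # size of the region to magnify
        y0, x0 = (h - ch) // 2, (w - cw) // 2   # centered crop
        crop = frame[y0:y0 + ch, x0:x0 + cw]
        magnified = cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
        cv2.imshow("magnified", magnified)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()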

Publications: ASSETS 2017, ASSETS 2018

Augmented reality magnification of a magazine document using a smartphone camera

AR Magnification Demo


In the News:

  • Big Ten Network: How Maryland researchers are improving reading for the visually impaired: BTN LiveBIG
  • WUSA9: UMD Researchers Hope to Help the Blind 'Experience the World'
  • TERP Magazine: A New Way With Words: Handy Device to Help Blind Read without Braille
  • New Scientist: Tiny fingertip camera helps blind people read without braille
  • PC Magazine: Fingertip Camera Reads to the Blind
  • Futurism: This New Tech Is Letting Blind People Read Without Braille
  • PSFK: Fingertip Cameras May Help The Blind Read Without Braille

COLLABORATORS

My research group is large and diverse. Here are a few of the professors, students, and others with whom I have collaborated:

Yumeng Wang
Ji Hyuk Bae
Bridget Cheng
Tony Cheng
Meena Sengottuvelu
Darren Smith
David Ross

MENTORING

I have had the privilege of mentoring several talented undergraduate and high school students as they pursued their group or independent research projects:

Victor DeSouza
Alexander Medeiros
Meena Sengottuvelu
Chuan Chen
Jessica Yin
Harry Vancao
Eric Lancaster