NASA SUITS AR/UX Design

Oct 2020 — May 2021

NASA SUITS Challenge Extended Project

The NASA SUITS design challenge “gives students an authentic engineering design experience supporting NASA’s Artemis mission” of landing American astronauts safely on the Moon by 2024, challenging them to “design and create spacesuit information displays within augmented reality environments”. In a typical year (this year being the exception), student teams travel to NASA’s Johnson Space Center in Houston, Texas, to test their designs. (here is their challenge website)
Previously, our design team presented an exit pitch to NASA in June 2020 for the 2020 NASA SUITS (Spacesuit User Interface Technologies for Students) challenge. This year, our UI/UX team continued iterating on and improving the designs while testing them and seeking feedback from NASA and space-industry experts.

In support of NASA’s Artemis mission, our objective in designing an MR (mixed reality) informatics display framework was to prioritize astronaut autonomy and safety while exploring different methods of displaying information and interacting with it (voice, hands-free gestures, eye gaze).
We created end-to-end hi-fi designs & scenarios for three main functions: camera, documentation, and navigation.

 

Duration: Oct 2020 - May 2021 (8 months)
My Role: AR/UX/UI Designer & Researcher
Team: Hope Mao, Esther Tang, Nigel Lu
Tools: Illustrator, XD, Figma, HoloLens, Premiere Pro
Exhibitions: XR @ Michigan 2021 Summit Student Showcase; UMSI Student Exposition - Spring 2021


Project Objectives

Develop an augmented reality (AR) informatics display system to enhance safety and efficiency on extravehicular activity (EVA) missions.

• Astro Autonomy + Safety through automation

• Best-case interactions: eye hover/tracking, gestures, voice control, head gaze, haptics/vibrations, etc. → “click” without hands

• Narrowing Scope: Prioritizing camera, documentation, navigation

• Focusing on specific use case scenarios

 

Key User Interactions

We’re aware that astronauts’ hands are often occupied by tools or other items, and that they go on long missions in heavy, restrictive suits. We decided our primary method of interaction would be voice.

Our artificial intelligence, VEGA (developed during the NASA SUITS challenge), serves as the EVA astronaut’s personal assistant, providing the information they need during EVAs. In addition, astronauts can use eye gaze as a cursor to navigate the interface while using voice to make selections.

Our secondary interaction is hand gestures: anything that can be done with voice can also be done by clicking the interface directly.

System feedback is given through visual, audio, and/or haptic cues. Finally, the fallback is non-digital methods such as pre-prepared notes on paper.

Primary interactions: eye gaze/tracking mimics the cursor/hover, and voice control provides the discrete “click”

Secondary interactions: physical (hand, head) gestures

Feedback: given through visual, audio, and haptic/vibration cues

Fallback: traditional, non-digital methods if the technology fails
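To make this hierarchy concrete, here is a minimal TypeScript sketch of how gaze, voice, and gesture input could be routed; the names (InputEvent, InteractionRouter, etc.) are our own illustration under assumed types, not code from the project.

// Hypothetical sketch of the interaction hierarchy described above.
type InputSource = "eyeGaze" | "voice" | "handGesture" | "headGesture";
type FeedbackChannel = "visual" | "audio" | "haptic";

interface InputEvent {
  source: InputSource;
  target?: string;   // UI element currently under the gaze cursor
  command?: string;  // e.g. "select", "drop waypoint"
}

interface UiAction {
  element: string;
  command: string;
  feedback: FeedbackChannel[];
}

class InteractionRouter {
  private gazeTarget: string | null = null;

  handle(event: InputEvent): UiAction | null {
    switch (event.source) {
      case "eyeGaze":
        // Eye gaze only moves the cursor; it never 'clicks' on its own.
        this.gazeTarget = event.target ?? null;
        return null;
      case "voice":
        // Voice performs the discrete 'click' on whatever sits under the gaze cursor.
        if (this.gazeTarget && event.command) {
          return { element: this.gazeTarget, command: event.command, feedback: ["visual", "audio"] };
        }
        return null;
      case "handGesture":
      case "headGesture":
        // Secondary path: anything voice can do can also be done with a direct gesture.
        if (event.target && event.command) {
          return { element: event.target, command: event.command, feedback: ["visual", "haptic"] };
        }
        return null;
      default:
        return null;
    }
  }
}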

• Ideal: the functionality of AR without needing to wear separate goggles/headsets (built into the astro helmet)

HoloLens AR display

• Projected HUD

• display/interface is fixed to head movement


 

Personas and Scripting

We created a script to communicate how our designs and interactions might fit within an actual EVA mission. The script is divided into 3 sections to showcase specific design functions. The main characters of our script are Jane and Neil, personas created from interviews with former astronauts, scientists, and geologists.

 

Design Process

 

Below are the 3 main scenarios we created to tell a clear story of the system’s use cases and functionality. In the demo videos, eye-gaze interactions are shown as a white translucent circle (the cursor), and voice commands are followed by visual and audio cues.

Scenario 1

Demo video link
Scenario: Jane and Neil are on a rover, on their way to their mission destination. Neil asks Jane to stop driving and uses the drop waypoint function to mark an interesting spot. He shares it with Jane so she can see the spot he is referring to.

Key Functions:

• Creating waypoint markers
• Shareable waypoint function


Waypoint Markers

  • Different waypoint types allow astronauts to establish their own organizational system

  • Waypoints are dropped on the lunar surface with eye-gaze (aim) and voice command (drop)
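Below is a rough TypeScript sketch of how the gaze-aim/voice-drop flow and waypoint sharing could be modeled; the waypoint types, field names, and functions are hypothetical illustrations, not the project’s actual implementation.

// Illustrative sketch of the gaze-aim / voice-drop waypoint flow.
type WaypointType = "pointOfInterest" | "sample" | "hazard" | "rendezvous";

interface Vec3 { x: number; y: number; z: number; }

interface Waypoint {
  id: string;
  type: WaypointType;
  position: Vec3;        // point on the lunar surface hit by the gaze ray
  createdBy: string;
  sharedWith: string[];
}

const waypoints: Waypoint[] = [];

// Called when VEGA recognizes a command like "VEGA, drop a sample waypoint".
function dropWaypoint(astronaut: string, type: WaypointType, gazeHit: Vec3): Waypoint {
  const wp: Waypoint = {
    id: `wp-${waypoints.length + 1}`,
    type,
    position: gazeHit,
    createdBy: astronaut,
    sharedWith: [],
  };
  waypoints.push(wp);
  return wp;
}

// "VEGA, share this waypoint with Jane" makes the marker visible on a crewmate's display.
function shareWaypoint(wp: Waypoint, crewmate: string): void {
  if (!wp.sharedWith.includes(crewmate)) wp.sharedWith.push(crewmate);
}

// Example: Neil aims with eye gaze, drops a point-of-interest marker, then shares it.
const marker = dropWaypoint("Neil", "pointOfInterest", { x: 12.4, y: 0, z: -3.1 });
shareWaypoint(marker, "Jane");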

 

Scenario 2

Demo video link

Scenario: Jane and Neil decide to collect samples. Jane goes through the process of creating the GeoNote, taking pictures, adjusting camera settings, making voice notes, making annotations on the images, and linking the note to the sample bag they’re storing the sample in.

Key Functions:

• GeoNote creation/modification
• Camera operation with camera settings
• Image annotation
• Sample bag QR code
• Linking GeoNotes with sample bags
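As an illustration of how a GeoNote could tie these functions together, here is a hedged TypeScript sketch of the data model and the QR-code link to a sample bag; all field names and helpers are assumptions for illustration, not the project’s schema.

// Rough sketch of a GeoNote record and its link to a sample bag via a scanned QR code.
interface CameraSettings { zoom: number; exposure: number; flash: boolean; }

interface AnnotatedImage {
  uri: string;
  settings: CameraSettings;
  annotations: string[];   // annotations layered on top of the image
}

interface GeoNote {
  id: string;
  createdBy: string;
  images: AnnotatedImage[];
  voiceNotes: string[];    // URIs of recorded voice memos
  sampleBagId?: string;    // set once the note is linked to a bag
}

// Scanning the sample bag's QR code returns its ID, which links the bag to the note.
function linkSampleBag(note: GeoNote, scannedQrValue: string): GeoNote {
  return { ...note, sampleBagId: scannedQrValue };
}

// Example: Jane creates a GeoNote, attaches an annotated photo, and links it to bag "BAG-042".
let note: GeoNote = {
  id: "geonote-7",
  createdBy: "Jane",
  images: [{
    uri: "photo-001.png",
    settings: { zoom: 2, exposure: 0.5, flash: false },
    annotations: ["basalt vesicles"],
  }],
  voiceNotes: ["memo-001.wav"],
};
note = linkSampleBag(note, "BAG-042");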



Scenario 3

Demo video link

Scenario: Jane and Neil navigate to and arrive at the spot they marked in Scenario 1, and Jane opens her documentation to compare sample information. She also jots down a few personal notes.

Key Functions:

• View waypoints
• Navigating
• Reviewing previous GeoNote documentation
• Personal notes
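As a small technical aside on the navigation function: guiding an astronaut to a waypoint ultimately comes down to distance and relative bearing. The TypeScript sketch below shows one way that math could look in a flat local coordinate frame; it is purely our assumption (hypothetical Position type and functions), not the project’s implementation.

// Distance and relative bearing from the astronaut to a waypoint (flat local coordinates, meters).
interface Position { x: number; y: number; }

function distanceTo(from: Position, to: Position): number {
  return Math.hypot(to.x - from.x, to.y - from.y);
}

// Relative bearing in degrees: 0 = straight ahead, positive = turn right.
// Heading 0 is assumed to face the +y direction.
function relativeBearing(from: Position, headingDeg: number, to: Position): number {
  const targetDeg = (Math.atan2(to.x - from.x, to.y - from.y) * 180) / Math.PI;
  let diff = targetDeg - headingDeg;
  if (diff > 180) diff -= 360;
  if (diff < -180) diff += 360;
  return diff;
}

// Example: the waypoint is 50 m away and requires roughly a 37° turn to the right.
const jane: Position = { x: 0, y: 0 };
const waypoint: Position = { x: 30, y: 40 };
console.log(distanceTo(jane, waypoint));          // 50
console.log(relativeBearing(jane, 0, waypoint));  // ≈ 36.87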


Next Steps

Although we are concluding our project, there is much more that can be done in terms of research and design.

More interviews and user testing with astronauts or general users would be extremely helpful for improving the interface, finding the most effective feedback type and frequency, and evaluating the flow of interaction.

Testing in controlled environments with devices like the HoloLens or a projected HUD would be ideal for gathering accurate feedback.

We hope our designs provide inspiration and guidance for future NASA missions, and we look forward to continuing to explore the possibilities of designing for space.