
Create a challenge that, if pursued, would incentivize the development of runtime-generated, context-aware user interfaces that autonomously adapt to astronaut cognitive states, mission phases, and environmental conditions, enabling seamless human-machine interaction for lunar operations, deep space missions, and eventual Mars exploration.
Focus on defining the problem, not solving it. The solution topic you create will be the focus of student innovation efforts over the next 18 months.

Understanding why this matters helps you see the bigger picture and focus your topic on challenges that align with NASA's mission.
By 2040, NASA plans to establish a sustained presence on the Moon with up to 144 people living in lunar habitats for 45+ days at a time, preparing for the first human missions to Mars. This vision requires a fundamental shift in how astronauts interact with increasingly complex spacecraft systems, robots, life support infrastructure, and scientific instruments.
The challenge is severe: astronauts on the lunar surface must monitor hundreds of data streams across habitat systems, rovers, experiments, and health devices. Current interfaces, designed months in advance with fixed menu structures, force crew members to search through irrelevant information during critical moments. When a habitat pressure alarm triggers at 3 AM while the crew is simultaneously managing a rover malfunction and a medical emergency, every second spent clicking through menus could be catastrophic.
Runtime Generative UI represents a shift from designed interfaces to generated experiences. Instead of predetermined screens, AI systems analyze the astronaut's immediate situation (their mental workload, the mission phase, environmental conditions, and task urgency) and assemble custom interface elements in real-time from a library of validated components. A routine systems check might show a simple dashboard, while an emergency automatically surfaces only the controls needed for that specific scenario.
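As a concrete illustration of that assembly step, the sketch below filters a small validated component library by mission phase and current workload. Everything here (the OperationalContext and UIComponentSpec shapes, the thresholds, the component names) is a hypothetical example, not an existing NASA system or API.

```typescript
// Hypothetical sketch of runtime interface assembly; all names and
// thresholds are illustrative assumptions, not an existing API.

type MissionPhase = "routine-ops" | "eva" | "emergency";

interface OperationalContext {
  phase: MissionPhase;
  workload: number;       // 0 (idle) .. 1 (saturated), from a workload model
  activeAlerts: string[]; // e.g. ["habitat-pressure"]
}

interface UIComponentSpec {
  id: string;
  relevantPhases: MissionPhase[];
  maxWorkload: number; // hide this component above this workload level
  priority: number;    // higher surfaces first
}

// A small validated library; a real system would load formally specified components.
const library: UIComponentSpec[] = [
  { id: "full-telemetry-dashboard", relevantPhases: ["routine-ops"], maxWorkload: 0.5, priority: 1 },
  { id: "pressure-alarm-panel", relevantPhases: ["emergency"], maxWorkload: 1.0, priority: 10 },
  { id: "comms-shortcut", relevantPhases: ["routine-ops", "eva", "emergency"], maxWorkload: 1.0, priority: 5 },
];

// Keep only components relevant to the current situation, most urgent first,
// so an overloaded crew member sees fewer, sharper controls.
function assembleInterface(ctx: OperationalContext): UIComponentSpec[] {
  return library
    .filter(c => c.relevantPhases.includes(ctx.phase) && ctx.workload <= c.maxWorkload)
    .sort((a, b) => b.priority - a.priority);
}

// Example: a pressure emergency at high workload surfaces only the alarm panel and comms.
console.log(
  assembleInterface({ phase: "emergency", workload: 0.8, activeAlerts: ["habitat-pressure"] })
    .map(c => c.id),
);
```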
For missions beyond low-Earth orbit, this becomes mission-critical. Mars missions will experience communication delays of up to 20 minutes each way, making real-time ground support impossible. Astronauts must make complex decisions autonomously, supported by intelligent systems that present information tailored to their immediate needs without overwhelming them. NASA's human-systems requirements specify maximum crew workload ratings on the Bedford Workload Scale, and interfaces that adapt to workload in real time are essential to meeting them.
Starting with lunar operations provides an ideal testing ground. Round-trip communication between the Moon and Earth takes only about 2.5 seconds, allowing ground teams to validate autonomous interface generation while maintaining safety oversight. Success on the Moon will prove these systems can support the complete autonomy required for Mars, where astronauts will live and work for years with minimal Earth support.
This technology enables the scale-up NASA envisions: permanent lunar settlements with rotating crews, autonomous robotic infrastructure, resource processing facilities, and eventually, the first self-sustaining Martian colonies. Each requires interfaces that serve users with different expertise levels (astronauts, scientists, engineers, medical professionals) while adapting to the unique demands of off-world operations.
These six core requirements highlight the foundational capabilities needed today to establish a trajectory toward runtime generative interfaces that will support NASA's 2040 lunar presence and Mars missions, while also creating immediate value in terrestrial high-stakes environments.

Establish standardized libraries of atomic, safety-verified interface components (buttons, data displays, alert mechanisms, input controls) with formal specifications that enable AI systems to programmatically assemble interfaces while guaranteeing functional correctness and meeting accessibility requirements.
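One possible shape for such a formal component specification is sketched below. The contract fields (preconditions, postconditions, wcagLevel, verifiedBy) are assumptions chosen for illustration, not an established standard.

```typescript
// Illustrative sketch of a formally specified atomic component.
// Field names and values are assumptions, not an established standard.

interface ComponentContract {
  id: string;
  kind: "button" | "data-display" | "alert" | "input";
  // Machine-checkable conditions an assembler must satisfy before using
  // the component, e.g. the target system must be online.
  preconditions: string[];
  // Guaranteed behavior the component's implementation has been verified against.
  postconditions: string[];
  // Accessibility guarantees carried as metadata so generated layouts inherit them.
  wcagLevel: "A" | "AA" | "AAA";
  minTouchTargetMm: number;
  verifiedBy: string; // reference to the verification artifact for this component
}

const abortButton: ComponentContract = {
  id: "eva-abort-button",
  kind: "button",
  preconditions: ["eva-in-progress", "abort-path-available"],
  postconditions: ["abort-command-queued", "crew-acknowledgement-requested"],
  wcagLevel: "AA",
  minTouchTargetMm: 15, // sized for gloved operation
  verifiedBy: "proof://abort-button-v3", // hypothetical artifact reference
};
```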
Create structured frameworks for representing operational context (task type, environmental conditions, available resources, user expertise, system state) and reasoning about which interface elements are relevant for specific situations, enabling consistent decision-making about what to display.
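A minimal sketch of what such a context frame and relevance reasoning could look like follows; the schema and the rule shape are illustrative assumptions, not a defined framework.

```typescript
// Sketch of a structured context frame plus explicit relevance rules;
// the schema and rules are assumptions for illustration only.

interface ContextFrame {
  task: { type: string; urgency: "low" | "medium" | "high" };
  environment: { location: "habitat" | "surface" | "vehicle"; lighting: "nominal" | "degraded" };
  user: { role: "astronaut" | "scientist" | "engineer" | "medical"; expertise: number }; // 0..1
  system: { degradedSubsystems: string[] };
}

// Relevance rules map a context frame to component tags worth displaying,
// making "what to show" an explicit, auditable decision rather than a fixed menu.
type RelevanceRule = (ctx: ContextFrame) => string[];

const rules: RelevanceRule[] = [
  ctx => (ctx.task.urgency === "high" ? ["alerting", "procedures"] : []),
  ctx => (ctx.system.degradedSubsystems.length > 0 ? ["diagnostics"] : []),
  ctx => (ctx.user.role === "medical" ? ["crew-health"] : []),
];

function relevantTags(ctx: ContextFrame): Set<string> {
  return new Set(rules.flatMap(rule => rule(ctx)));
}
```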
Develop systematic approaches for verifying that AI-generated interfaces meet safety-critical requirements before deployment, including formal methods for checking component combinations, testing for edge cases, and implementing human-in-the-loop validation for high-risk scenarios.
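The sketch below shows the flavor of such pre-deployment checking: a generated layout is tested against explicit invariants, and any violation blocks activation. The invariants themselves are invented examples of the kind of property a formal checker might enforce.

```typescript
// Hedged sketch of pre-deployment checks on a generated layout;
// the invariants shown are invented examples, not a real rule set.

interface GeneratedLayout {
  componentIds: string[];
  declaredAlerts: string[];
  riskLevel: "low" | "high";
}

type Invariant = { name: string; holds: (layout: GeneratedLayout) => boolean };

const invariants: Invariant[] = [
  // Every surfaced alert must come with an acknowledgement control.
  {
    name: "alerts-acknowledgeable",
    holds: l => l.declaredAlerts.length === 0 || l.componentIds.includes("alert-ack-control"),
  },
  // High-risk layouts must route through human review before activation.
  {
    name: "high-risk-needs-review",
    holds: l => l.riskLevel !== "high" || l.componentIds.includes("human-review-gate"),
  },
];

function verify(layout: GeneratedLayout): { ok: boolean; violations: string[] } {
  const violations = invariants.filter(i => !i.holds(layout)).map(i => i.name);
  return { ok: violations.length === 0, violations };
}
```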
Develop reliable methodologies for measuring cognitive load, attention, stress, and fatigue through non-invasive sensors and interaction pattern analysis, creating baseline standards that enable interfaces to detect when users are approaching cognitive capacity limits.
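By way of illustration, a workload index might fuse several normalized signals as sketched below. The features, weights, and threshold are placeholders, not validated values; a real system would calibrate per crew member and validate against rated workload scales.

```typescript
// Sketch of a workload index fused from normalized, non-invasive signals.
// Features, weights, and threshold are placeholders, not validated values.

interface WorkloadFeatures {
  heartRateVariability: number; // normalized 0..1; lower HRV suggests higher load
  blinkRate: number;            // normalized 0..1
  inputErrorRate: number;       // slips/corrections per minute, normalized 0..1
  taskSwitchRate: number;       // context switches per minute, normalized 0..1
}

// Weighted fusion against a per-crew-member baseline.
function workloadIndex(f: WorkloadFeatures): number {
  const raw =
    0.35 * (1 - f.heartRateVariability) +
    0.15 * f.blinkRate +
    0.30 * f.inputErrorRate +
    0.20 * f.taskSwitchRate;
  return Math.min(1, Math.max(0, raw));
}

// Interfaces can simplify once the index crosses a capacity threshold.
const NEAR_CAPACITY = 0.7; // placeholder threshold

function shouldSimplify(f: WorkloadFeatures): boolean {
  return workloadIndex(f) >= NEAR_CAPACITY;
}
```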
Establish technical standards for generating interfaces that function across visual, auditory, haptic, and gestural modalities, with clear fallback hierarchies that enable graceful degradation when specific interaction channels are unavailable or compromised.
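Such a fallback hierarchy could be as simple as an ordered preference list per message class, as in this sketch; the modality orderings shown are assumptions, not a defined standard.

```typescript
// Illustrative fallback hierarchy across modalities; the orderings are
// assumptions showing graceful degradation, not a defined standard.

type Modality = "visual" | "auditory" | "haptic" | "gestural";

// Preference order per message class; pick the first modality still available.
const fallbackOrder: Record<"alert" | "status", Modality[]> = {
  alert: ["auditory", "haptic", "visual"], // alarms must cut through even if a display fails
  status: ["visual", "auditory", "haptic"],
};

function selectModality(
  kind: "alert" | "status",
  available: Set<Modality>,
): Modality | undefined {
  return fallbackOrder[kind].find(m => available.has(m));
}

// Example: with the helmet display compromised, an alert degrades to haptics.
console.log(selectModality("alert", new Set<Modality>(["haptic", "gestural"]))); // -> "haptic"
```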
Build technical foundations that enable the same interface generation logic to produce appropriate outputs for diverse hardware (tablets, wall displays, wearables, vehicle consoles) and software environments (web, native applications, embedded systems) while maintaining consistent interaction patterns.
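One common pattern for this is an abstract interface description consumed by per-target renderers, sketched below with hypothetical types; the AbstractUI and Renderer shapes are assumptions, not a real framework API.

```typescript
// Sketch of one generation logic feeding multiple render targets;
// AbstractUI and Renderer are hypothetical shapes, not a real framework.

interface AbstractUI {
  title: string;
  controls: { id: string; label: string }[];
}

// Each hardware/software target implements the same interface, so the
// generation logic stays identical across tablets, wall displays, and consoles.
interface Renderer {
  render(ui: AbstractUI): string;
}

const webRenderer: Renderer = {
  render: ui =>
    `<h1>${ui.title}</h1>` +
    ui.controls.map(c => `<button id="${c.id}">${c.label}</button>`).join(""),
};

const voiceRenderer: Renderer = {
  render: ui =>
    `${ui.title}. Available commands: ${ui.controls.map(c => c.label).join(", ")}.`,
};

const panel: AbstractUI = {
  title: "Airlock Status",
  controls: [{ id: "cycle", label: "Cycle airlock" }, { id: "hold", label: "Hold" }],
};

console.log(webRenderer.render(panel));
console.log(voiceRenderer.render(panel));
```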
How does your topic create meaningful change? The most compelling solution topics bridge the needs of Earth and the demands of space, offering scalable, impactful answers to humanity's biggest challenges. Before diving into feasibility, consider how your topic can shape the world today while paving the way for tomorrow.

Is your topic realistic? Even the most transformative ideas need to be grounded in feasibility. This is about asking the practical questions. Great solution topics are ambitious but achievable within a defined scope.
Can measurable progress be made within 18 months?
Does it rely on existing tools and technology, or those likely available by 2027?
Is your topic specific, focused, and actionable?
Is it practical within budget, manpower, and material constraints?
Can it be scaled for use across regions or contexts?
Does it address a real-world problem with the potential for meaningful impact?
Runtime generative UI technology addresses massive markets with immediate revenue potential while developing capabilities essential for NASA's 2040 lunar presence. These innovations promise safer operations for astronauts and transformative solutions for high-stakes Earth environments.