Projects – Spring 2026


Sponsor Background

Aspida is a tech-driven, agile insurance carrier based in Research Triangle Park. We offer fast, simple, and secure retirement services and annuity products for effective retirement and wealth management. More than that, we’re in the business of protecting dreams: those of our partners, our producers, and especially our clients.

Background and Problem Statement

When developers submit pull requests, it is often unclear which platforms, features, or business areas can be impacted by the changes. This lack of visibility slows down QA planning, increases regression risk, and makes it difficult to prioritize testing effectively. Manual tagging and tribal knowledge are not scalable solutions. A tool that leverages machine learning to analyze code changes and provide actionable insights into impacted areas would significantly improve release confidence and streamline the testing process.

Project Description

Aspida proposes building a tool that automatically analyzes PRs, clusters files by functional areas, and predicts impacted features or components. The impact analysis should consider both structural and semantic relationships within the codebase to provide accurate predictions. The system should:

  1. Ingest PR Data
  2. Cluster the Repository
  3. Map Changes in the PR to Clusters
  4. Generate an Impact Report
  5. Stretch Goals:
    • Develop a dashboard to visualize clusters and track PR impacts.
    • Integrate with test management tools to auto-trigger relevant regression tests.
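
As a rough illustration of steps 3 and 4, the sketch below (in TypeScript, matching the preferred Node.js backend) maps a PR's changed files onto a precomputed file-to-cluster assignment and emits a minimal impact report. The cluster names and the shape of the lookup table are illustrative assumptions, not a prescribed design; a real clustering step would derive them from structural and semantic analysis.

type ImpactReport = { cluster: string; changedFiles: string[] }[];

// Assumed output of the clustering step (2): file path -> functional cluster.
const clusterOf: Record<string, string> = {
  "src/billing/invoice.ts": "billing",
  "src/billing/tax.ts": "billing",
  "src/auth/session.ts": "authentication",
};

function buildImpactReport(changedFiles: string[]): ImpactReport {
  const byCluster = new Map<string, string[]>();
  for (const file of changedFiles) {
    const cluster = clusterOf[file] ?? "unclassified";
    byCluster.set(cluster, [...(byCluster.get(cluster) ?? []), file]);
  }
  return [...byCluster.entries()].map(([cluster, files]) => ({
    cluster,
    changedFiles: files,
  }));
}

// A PR touching billing and auth code flags both areas for QA attention.
console.log(buildImpactReport(["src/billing/tax.ts", "src/auth/session.ts"]));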

Technologies and Other Constraints

  • LLM – Claude via AWS Bedrock (preferred)
  • Cloud Environment – AWS (preferred, but local execution possible)
  • UI (if needed) – React
  • Backend – Node.js with TypeScript
Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

Bandwidth is a software company focused on communications. Bandwidth’s platform is behind many of the communications you interact with every day. Calling mom on the way into work? Hopping on a conference call with your team from the beach? Booking a hair appointment via text? Our APIs, built on top of our nationwide network, make it easy for our innovative customers to serve up the technology that powers your life.

Background and Problem Statement

Any organization working with personal or confidential data requires tools that can remove sensitive information safely and accurately. Manual redaction processes are difficult to scale and can lead to errors. Bandwidth has an opportunity to provide automated, privacy-first tooling that aligns with our trust and compliance commitments.

Project Description

The AI-Redaction Service is a tool designed to automatically detect and remove sensitive information—such as phone numbers, emails, dates (e.g., date of birth), credit card numbers, and other Personally Identifiable Information (PII)—from call transcripts or audio. It enhances privacy and compliance for customers using Bandwidth’s call recording and transcription features. Students will build a text-based redaction MVP, with optional audio enhancements as stretch goals.

Objectives

  • Automatically detect and redact common PII in transcripts.
  • Provide structured output summarizing detected entities.
  • Build a simple, intuitive interface or API for redaction workflows.
  • Deliver a working demonstration suitable for internal and customer use cases.

Core Features (MVP)

  • Upload or paste transcript → receive redacted version.
  • Detect PII categories including: phone numbers, emails, credit card numbers, account numbers, timestamps, and other structured entities.
  • Replace sensitive elements with standardized tokens (e.g., [REDACTED_PHONE]).
  • JSON summary of detected items.
  • No customer data required; supports synthetic transcripts.
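
To make the token format above concrete, here is a minimal TypeScript sketch of the regex layer described under the technical approach. The patterns are simplified illustrations, not production-grade detectors; each match is replaced with a standardized token and recorded in a JSON-style summary.

const patterns: { label: string; regex: RegExp }[] = [
  { label: "EMAIL", regex: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { label: "PHONE", regex: /\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g },
  { label: "CARD", regex: /\b(?:\d[ -]?){13,16}\b/g },
];

function redact(transcript: string) {
  const detected: { label: string; match: string }[] = [];
  let redacted = transcript;
  for (const { label, regex } of patterns) {
    redacted = redacted.replace(regex, (match) => {
      detected.push({ label, match });
      return `[REDACTED_${label}]`;
    });
  }
  return { redacted, summary: detected };
}

const out = redact("Call me at 919-555-0100 or jane@example.com.");
console.log(out.redacted); // Call me at [REDACTED_PHONE] or [REDACTED_EMAIL].
console.log(JSON.stringify(out.summary, null, 2));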

Success Criteria

  • High accuracy for common structured PII.
  • Maintains readability of redacted transcripts.
  • Low false positives and minimal missed detections.
  • Demonstrates value for Bandwidth customers and internal teams.
  • Clear path for productization or integration into existing tools.

Stretch Features  

  • Named Entity Recognition (NER) model for names, addresses, and contextual PII.
  • Audio redaction (mute or bleep sensitive portions).
  • Integration with Bandwidth Recording or Transcription API.
  • Real-time or streaming redaction.
  • Customizable redaction rules.

Technologies and Other Constraints

Technical Approach

  • Backend API for detection, redaction, and output formatting.
  • Two-layer pipeline:
    • Regex + rule-based detection for structured PII.
    • ML/NER model for contextual PII (optional).
  • Simple web UI for upload and visualization (React, Streamlit, or HTML/JS).
  • Synthetic test data for evaluation.
  • Clear explainability and auditing of detected entities.

Expected Deliverables

  • Functional text-based redaction service (API or UI).
  • Documentation of detection logic and patterns.
  • Sample transcripts and synthetic test suite.
  • Demo of redaction workflow.
  • GitHub repository with setup and usage instructions.
  • Optional: audio redaction and/or enhanced NER layer.
Students will be required to sign over IP to sponsors when the team is formed

Sponsor Information & Background

The Computer Science department at NC State teaches software development using a “real tools” approach. Students use real-world industry tools, such as GitHub and Jenkins, to learn about software development concepts like version control and continuous integration. By using real tools, students are better prepared for careers in computing fields that use these tools.

Problem

Google Docs maintains a history of all revisions made to a given document. Several browser plugins exist that allow a user to replay the changes made to a document. These browser plugins allow users to jump backward, forward, or ‘replay’ an animation of the revision history at custom speeds. The plugins also provide analytics, such as displaying a calendar view that identifies which days revisions were made.

GitHub provides an experience similar to Google Docs: a revision history is maintained to track changes to a software codebase. However, it’s often difficult to understand how a code file has changed over time, especially on team assignments where there may be hundreds of commits. GitHub provides an interface for manually inspecting commit histories, but no interface exists to view a ‘live replay’ of the history of a given file.

Many core computer science courses are considered ‘large’ courses, often with over 200 students enrolled each semester. A tool that allows replaying a full history of changes to a file in a repository over time could help students better understand their own coding approaches (such as “what have I tried already?” or “what work has my team member contributed since our last meeting?”). The tool could also help teaching team members understand how to better help students debug logic errors since they could quickly replay a history of edits to a file to know what a student has already attempted.

Solution

The user should indicate which GitHub repository and branch they wish to inspect. The user should also be able to select a specific file for which to replay commit history. The tool should allow users to perform actions such as ‘rewind’, ‘fast forward’, ‘pause’, or ‘play’ while presenting a visual replay of all changes made to the given file. Each change should be clearly highlighted and annotated with information such as the name of the person who committed the code. A visual timeline should also be presented to show a history of commits. Within this timeline, the tool should clearly indicate how many lines of code were added/removed from one commit to the next. Additional features may be identified during the course of the project.
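
As a starting-point sketch, the file's commit history can be pulled from the GitHub REST API and ordered oldest-first to drive playback. This TypeScript example uses Node's built-in fetch and a placeholder public repository; authentication (needed for private repos and higher rate limits) is omitted.

const owner = "octocat", repo = "hello-world"; // placeholder repository
const branch = "main", path = "README.md";     // placeholder file to replay

async function fileHistory() {
  const url = `https://api.github.com/repos/${owner}/${repo}/commits` +
    `?sha=${branch}&path=${encodeURIComponent(path)}`;
  const commits: any[] = await (await fetch(url)).json();
  // The API returns newest first; reverse so "play" steps forward in time.
  return commits.reverse().map((c) => ({
    sha: c.sha,
    author: c.commit.author.name,
    date: c.commit.author.date,
    message: c.commit.message,
  }));
}

fileHistory().then((timeline) => console.log(timeline));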

Technologies

The commit history replay tool should be designed and implemented as a web application. The backend should be implemented in either JavaScript/TypeScript or Java as a REST API. The frontend will require the use of visualization libraries, such as D3.js and D3-derived libraries. Using a frontend framework such as React is acceptable. The number of dependencies should be limited to those genuinely needed, to simplify maintenance of the application.

Sponsor Background

Dr. Tian is a research scientist who works with children on designing learning environments to support artificial intelligence learning in K-12 classrooms.

Dr. Tiffany Barnes is a distinguished professor of computer science at NC State University. Dr. Barnes conducts computer science education research and uses AI and data-driven insights to develop tools that assist learners’ skill and knowledge acquisition.

Background and Problem Statement

Elementary students rarely have opportunities to learn computational thinking, data literacy, and AI concepts through creative, meaningful activities. Traditional instruction often separates storytelling from STEM learning, even though upper-elementary students naturally reason about cause and effect, choices, and narrative structure. Schools and libraries need developmentally appropriate tools that let children learn these skills through playful exploration—not through direct instruction alone.

This project addresses this gap by using AI-supported computational storytelling. The challenge for the student team is to design and develop components of a web-based platform where children (grades 3–5) can create branching stories, test the logic behind their narratives, analyze reader data, and receive age-appropriate AI scaffolding. The motivation is to broaden access to computational thinking and AI literacy skills by embedding them into an engaging creative medium that teachers and librarians can use in classrooms, camps, and community programs.

Project Description

We envision a modular, child-friendly web platform (see Figure 1 for user flow overview) that guides students through story creation, story logic design, and data-driven story analysis, with optional AI assistants providing hints and reflective prompts.

[Figure 1: Storytelling Platform Overview]

We envision that the platform will contain the following components:

  1. Story Creation Tools
    A visual editor where students design characters, events, and branching choices. Students can assign probabilities to different story paths and visualize their narrative as a decision tree.
  2. Story Player
    An interactive reader mode that lets peers “play” through the story. Each choice generates data about which paths are selected and how often various endings occur.
  3. Story Database & Interaction Data Logging
    A shared gallery where students can publish stories, explore others’ work, and remix stories as new starting points. All user interactions with the platform will be logged for later research analysis.
  4. Story Analysis & Data Dashboard
    Child-friendly visualizations (bar charts, heatmaps, path diagrams) showing which choices were popular, which branches failed or succeeded, and how changing probabilities might affect outcomes.
  5. AI Assistants
    LLM-based helpers that can suggest improvements (e.g., “Do you want to clarify this description?”), flag logical inconsistencies, or help students interpret analytics data.

Because of the scope of senior design projects, we encourage student teams to choose one to three of components 1–4 as a semester project.

Use Case Example

Lina, a fourth grader, logs into the AI-in-the-Loop platform for the first time to create a story. She wants to create a story about a clever fox named Mira. Lina clicks on the character card to create Mira and types in Mira’s strengths: smart and brave (see Figure 1 illustrating some of these choices). The AI prompts her to consider weaknesses: “What challenges might Mira face? Adding a weakness can make your story more interesting.” Lina adds: “Mira can’t resist exploring dangerous places.” With that, Mira feels like a real character, and Lina is eager to see what adventures she might have. Lina starts creating an event card: “Mira finds a dark cave in the forest.” Below the card, the platform prompts her to add choices. Lina clicks the “+” button twice and types: 1) explore the cave or 2) run away. Each choice automatically generates a new blank Event Card, which Lina clicks to add descriptions and small illustrations. The system prompts Lina to assign probabilities to each path, and she initially sets them equal: explore (50%) and run away (50%). The AI gently suggests: “Since Mira is brave and clever, should the chance of exploring the cave be higher?” Lina thinks for a moment and decides to raise it to 70%.

Once Lina has filled in all next-event cards, she clicks “Play Your Story”. She clicks through each choice, watching the story unfold dynamically. Some outcomes succeed, others fail, and the AI gently prompts reflection: “Why do you think your hero lost in this path? Could a different attribute have helped?” Lina adjusts a choice, testing how Mira’s bravery changes the outcome.

Once Lina finishes the story, she publishes her story to the story database on the platform. Over the next week, her peers play her story and some even choose to remix it to extend the story beyond her original narrative, and the platform collects reader data: which paths were chosen most, which failed, and which led to surprise endings. Lina later opens her Data Dashboard to examine the story outcomes, sees that most readers avoided running away, and notices that her “explore” choice led to many successful outcomes. 
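
One way to represent this flow, sketched in TypeScript with assumed field names: each event node holds probability-weighted choices, and the Story Player picks a branch in proportion to the probabilities Lina assigned.

interface Choice { label: string; probability: number; next: string }
interface EventNode { id: string; text: string; choices: Choice[] }

const story: Record<string, EventNode> = {
  cave: {
    id: "cave",
    text: "Mira finds a dark cave in the forest.",
    choices: [
      { label: "explore the cave", probability: 0.7, next: "treasure" },
      { label: "run away", probability: 0.3, next: "home" },
    ],
  },
  treasure: { id: "treasure", text: "Mira finds a hidden treasure!", choices: [] },
  home: { id: "home", text: "Mira heads home safely.", choices: [] },
};

// Pick a choice in proportion to its assigned probability.
function pickChoice(node: EventNode): Choice {
  let roll = Math.random();
  for (const choice of node.choices) {
    if ((roll -= choice.probability) <= 0) return choice;
  }
  return node.choices[node.choices.length - 1];
}

const chosen = pickChoice(story.cave);
console.log(`${chosen.label} -> ${story[chosen.next].text}`);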

Benefit to End Users

Children explore computational logic and data literacy through creative storytelling. Educators gain a standards-aligned tool for integrating STEM skills into authentic narrative tasks.

Technologies and Other Constraints

Students can begin by exploring Twine (twinery.org) to understand branching story representation, node-based narrative structure, and basic functionality. This platform will not use Twine directly, but can take some inspiration from Twine’s editor and player workflow.

Suggested Technologies:

  • Front-end framework: React, Vue, Svelte, etc.
  • Backend: Node.js, Python/FastAPI, etc.
  • Database: PostgreSQL, MongoDB, or similar to store stories, branches, and analytics data.
  • Visualization library: D3.js, Chart.js, or Plotly for the Data Dashboard.
  • LLM integration: Llama, local APIs

Preferred Paradigm:

  • Web-based solution; mobile-responsive is helpful but not required.
  • For AI components, API-based design is preferred to allow modular integration.

Constraints & Considerations:

  • Use synthetic data for testing.
  • If using external LLM APIs, teams must follow usage policies and rate limits.
  • All code must be original (no copyrighted code beyond permissible open-source licenses).
  • No special hardware required.

Sponsor Background

Nathalie Lavoine is an Associate Professor in Renewable (Nano)materials Science and Engineering. As part of her research and teaching initiatives, she focuses on developing sustainable packaging from plant-derived resources to replace petroleum-derived products, lower the environmental footprint of packaging and increase the shelf-life and safety of food products.

By training, Dr. Lavoine is a Packaging Engineer. As an instructor at NC State, she shares her passion through the offering of one annual undergraduate level course on fiber-based packaging and regular guest lectures on this topic.

A common misperception reduces packaging to ‘just a box’. The reality is that sustainable containers are the product of a highly complex, multidisciplinary orchestration. Their development requires the integration of materials science, environmental ethics, mechanical engineering, automated logistics, and beyond. This project is driven by the need to dismantle this reductive view, engaging students, faculty and the public to recognize the technical depth, rigorous labor, and ethical considerations embedded in the materials we use every day.

Background and Problem Statement

The field of Sustainable Packaging is a multidisciplinary science involving complex material properties, historical context, and intricate lifecycle data. Traditional educational methods, primarily static lectures and summative paper-based assessments, often struggle to engage students with the highly technical and data-driven nature of the subject. There is a "visualization gap": students can memorize facts about recycling rates or polymer barriers, but they lack an interactive way to see how these elements combine to create a "complete" sustainable solution.

The motivation for this project is to bridge this gap through gamification. By transforming a dense database of packaging science into a competitive, "Kahoot-style" trivia experience, we can increase knowledge retention and provide students with a sense of progress. The project addresses the need for a modern, interactive tool that can be used both for individual study and as a real-time classroom engagement platform.

Project Description

The proposed solution is "The Sustainable Box," a full-stack web application. The core mechanic is a trivia game where players answer questions across five color-coded categories (such as History, Technical, 3 Pillars, Ethics, and End-of-Life).

Some examples of key features for this game:

  • Instead of the traditional trivia pie, the app will feature a 3D-rendered box. As a student masters a category, the corresponding face of the box fills with color. This provides a tangible, visual representation of building sustainable knowledge.
  • The game should offer multiple modes of gameplay: (1) a "Teacher Mode" allowing an instructor to host a live game, where students join via a room code on their own devices to compete in real time; (2) a self-paced mode for individual review (single player); and (3) a multi-player mode for social activity.
  • Because the primary users of this game will be teachers, a teacher’s dashboard would be important. Instructors should gain access to an analytics suite that tracks class performance. This would allow them to identify knowledge silos – specific categories where the whole class (or a large number of students) might be struggling (e.g., if the "Blue/Technical" face remains empty for most students).
  • We could also think of an administrative backend where an LLM is used to generate/verify question sets based on specific source materials to ensure the database remains fresh and accurate. To be discussed (we would need to confirm that the generated sources and questions are correct).

Other features can be developed and included. This list is not exhaustive, and I would appreciate student input as they will be the primary audience. We could also think of different difficulty levels.

Another aspect of the project will be the generation and collection of questions & answers. I may not have enough time and bandwidth to find all the different questions per category (I believe a classic Trivia game relies on 30-50 questions per category). Hence, the student team would be expected to do some additional literature research and to dive into this topic.

Example categories include: 1- history & evolution (ancient materials, industrial revolution, mid-century rise of plastics), 2- technical aspects/engineering (polymers, barrier properties, manufacturing process, structural integrity), 3- sustainability (environmental impact, social responsibility, economic viability – global vision), 4- consumerism & ethics (marketing psychology, regulations, labeling, user experience), and 5- end-of-life and data (recycling rates, biodegradation vs composting, LCA, waste management systems).
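
For illustration only, a question-bank entry and the per-category "face fill" computation might look like the following TypeScript sketch; all field names are assumptions to be refined with the sponsor.

type Category = "history" | "technical" | "sustainability" | "ethics" | "end-of-life";

interface Question {
  id: string;
  category: Category;
  difficulty: 1 | 2 | 3;
  prompt: string;
  options: string[];
  answerIndex: number;
}

const sample: Question = {
  id: "eol-017",
  category: "end-of-life",
  difficulty: 2,
  prompt: "Which process breaks packaging down only under industrial conditions?",
  options: ["Home composting", "Industrial composting", "Recycling", "Reuse"],
  answerIndex: 1,
};

// A face of the 3D box fills as the share of correct answers in its category grows.
function faceFill(correct: number, attempted: number): number {
  return attempted === 0 ? 0 : correct / attempted;
}

console.log(sample.prompt, "face fill:", faceFill(7, 10)); // 70% of the face colored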

Technologies and Other Constraints

  • A Web-based application is required to ensure accessibility across different devices (laptops, tablets, and smartphones) without requiring an app store download.
  • Students should be able to access the game easily, but the game should not, at this stage, be open access. Initially, access could be restricted to students with a specific class code or an ncsu.edu address to manage server load and data privacy.
  • A real-time communication framework may be necessary for the classroom mode. 
  • A database will be needed to manage a library of 150–250 questions and student leaderboards.
  • Potential constraints: 
    • The system must be able to handle at least 20–30 (at first) concurrent users in a single "Live Session" without significant latency.
    • Students must implement a way to flag or verify LLM-generated questions to ensure scientific accuracy.
    • The final product should be developed with an Open Source mindset, though the specific packaging science database may remain proprietary to NC State – to be confirmed depending on the nature of the sources.

Sponsor Background

LexisNexis Legal & Professional, an information and analytics company, states its mission as: to advance the rule of law around the world. This involves combining information, analytics, and technology to empower customers to achieve better outcomes, make more informed decisions, and gain crucial insights. They strive to foster a more just world through their people, technology, and expertise for both their customers and the broader communities they serve.

Background and Problem Statement

In its continuing mission to support the Rule of Law, LexisNexis has around 3,000 people working on over 200 projects per year developing software for its products.

In a rapidly changing environment, as opportunities arise and priorities shift, the question often asked (and rarely answered with confidence) is “What is the consequence of moving someone from one project to another?”

LexisNexis needs an intuitive tool to manage and track the association of people with projects, to provide the insight and data necessary to support business priority decisions. 

Project Description

LexisNexis is looking for a simple, intuitive application that will allow the management of resource allocation to teams and projects.

The tool will be used by Software Development Leaders to group their people into teams and to associate them to projects for a given duration.

It should help Leaders readily determine issues and opportunities with their teams and projects, and take actions accordingly.

The data collected will support decision making when considering the resourcing of projects when competing priorities need to be considered.

It should ultimately also support financial tracking and planning.

Technologies and Other Constraints

This project extends a foundation laid by a previous NCSU Senior Design team.

The team is at liberty to determine which elements of the previous team’s work they wish to retain and which they feel would benefit from rework/reimplementation.

The preferred solution would be an application accessible through Microsoft Teams, LexisNexis’ collaboration tool of choice.

LexisNexis is best placed to support development in C#, .Net, Angular and SQL Server, although the team may consider other technologies if appropriate.

The initial source of data will be CSV/Excel spreadsheets. 

Organizational data will ultimately be sourced through Active Directory, available through Microsoft Graph API.
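
As a small illustration of that eventual integration (shown in TypeScript for brevity, though the team may well build in C#), organizational data can be read from Microsoft Graph once a token has been acquired; token acquisition is omitted here and the selected fields are examples.

const ACCESS_TOKEN = process.env.GRAPH_TOKEN ?? "<token>"; // placeholder

async function listUsers() {
  const res = await fetch(
    "https://graph.microsoft.com/v1.0/users?$select=displayName,department,jobTitle",
    { headers: { Authorization: `Bearer ${ACCESS_TOKEN}` } },
  );
  const body = await res.json();
  return body.value; // Graph wraps results in a "value" array
}

listUsers().then((users) => console.log(users.length, "users loaded"));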

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

ShareFile is a leading provider of secure file sharing, storage, and collaboration solutions for businesses of all sizes. Founded in 2005 and acquired by Progress Software in 2024, ShareFile has grown to become a trusted name in the realm of enterprise data management. The platform is designed to streamline workflows, enhance productivity, and ensure the security of sensitive information, catering to a diverse range of industries, including finance, healthcare, legal, and manufacturing.

Background and Problem Statement

In today’s business environment, every organization—whether a startup, small business, or enterprise launching a new product—must establish a strong and recognizable brand identity. A brand identity includes visual elements such as logos, color palettes, and typography, as well as messaging components like taglines and brand voice. These elements shape how customers perceive a business and significantly influence trust, credibility, and market differentiation.

Historically, creating a cohesive brand identity required hiring branding agencies or freelance designers. These services often cost thousands to tens of thousands of dollars and can take weeks or months to complete. For many small businesses and entrepreneurs, this level of investment is unrealistic. As a result, they frequently launch with inconsistent, unprofessional, or generic branding that limits their competitiveness.

DIY design tools have attempted to democratize branding, but they still require substantial creative skill and strategic understanding. Users must know color theory, design principles, typography, and marketing psychology to produce effective brand assets. Even when tools provide templates or basic logo generators, the results often lack originality and fail to communicate a unique brand story.

Another major challenge is brand cohesion. Ensuring that a logo works with a color palette, that a tagline aligns with the visual identity, and that all elements communicate the right message requires expertise most users do not possess. Existing tools treat each asset separately, leaving users to manually piece together a brand system.

The current landscape has reduced the barrier to creating assets, but not to creating high-quality, cohesive, professional brand identities. We want to further reduce the time, cost, and expertise required by applying generative AI technology. By leveraging AI, a brand identity platform could generate complete, harmonized brand systems based on simple user prompts and intelligent design principles.

Project Description

The goal of this project is to create an AI-powered Brand Identity Studio that enables users to generate comprehensive, cohesive brand assets—including logos, color palettes, and taglines—through natural language prompts. The system will combine multimodal generative AI (text-to-image, text-to-text, and algorithmic color generation) with a backend platform for managing brand projects, asset versions, and analytics.

There are two primary user personas:

Brand Creators

These users initiate brand projects by providing prompts such as “Create a modern, minimalist brand identity for a sustainable skincare company.” They can iterate on AI-generated assets, mix and match components, and export production-ready files.

Brand Reviewers / Stakeholders

These users view generated assets, provide feedback, and help evaluate which brand directions best fit the business goals.

The platform will ensure that all generated assets work harmoniously—colors are accessible and complementary, logos scale effectively, and taglines align with the intended brand voice and target market.

Requirements

1. Brand Asset Generation

AI-Generated Brand Systems

Use generative AI models to produce:

  • Logos (text-to-image)
  • Taglines and brand messaging (text-to-text)
  • Color palettes (algorithmic color theory engine)
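
One possible reading of the color engine, sketched in TypeScript: derive a small palette from a base hue using fixed color-harmony offsets in HSL. The offsets and output shape are illustrative assumptions.

function hsl(h: number, s = 65, l = 55): string {
  return `hsl(${((h % 360) + 360) % 360}, ${s}%, ${l}%)`;
}

function palette(baseHue: number) {
  return {
    primary: hsl(baseHue),
    analogous: [hsl(baseHue - 30), hsl(baseHue + 30)], // neighbors on the color wheel
    complementary: hsl(baseHue + 180),                 // opposite side of the wheel
    neutral: hsl(baseHue, 10, 92),                     // near-white background tint
  };
}

console.log(palette(150)); // e.g., a green-anchored brand palette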

Prompt-Based Generation

Users describe their business, audience, and style preferences. The AI generates:

  • Multiple brand directions
  • Cohesive asset sets
  • Variations for refinement

Asset Types

  • Logos (primary, secondary)
  • Color palettes (primary, secondary, accessible variants)
  • Taglines and messaging snippets
  • Templates & Collateral (Presentation templates, business cards)
  • Typography recommendations (stretch)

Customization Tools (Stretch)

  • Edit color palettes
  • Regenerate logos with constraints
  • Adjust tagline tone or style

Preview Mode (Stretch)

Show assets applied to mockups, such as:

  • Business cards
  • Social media banners
  • Product packaging

2. User Interface (Frontend)

Brand Workspace

An interface where users:

  • View generated assets
  • Compare variations
  • Save or discard versions

Asset Renderer

Display logos, palettes, and taglines in a cohesive layout.

Export Tools

Allow users to download:

  • PNG, SVG, JPG logo files
  • Color palette files (ASE, JSON)
  • Tagline text files

3. Backend

Flexible Schema

Support multiple asset types and versions per project.

CRUD APIs

  • Create: New brand projects and asset generations
  • Read: Retrieve brand assets, versions, and analytics
  • Update: Modify or regenerate assets
  • Delete: Remove projects or versions

Asset Validation

Ensure generated assets meet accessibility and quality standards (e.g., color contrast).
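
The color-contrast check can be made concrete with the WCAG 2.x relative-luminance formula; the sketch below (TypeScript) computes the contrast ratio between two sRGB colors.

function luminance([r, g, b]: number[]): number {
  const lin = (v: number) => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg: number[], bg: number[]): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AA requires at least 4.5:1 for normal body text.
console.log(contrastRatio([33, 33, 33], [255, 255, 255]).toFixed(1)); // well above 4.5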

4. Integration with AI Services

AI Model Integration

Use external or custom-trained models for:

  • Logo generation (image models)
  • Tagline generation (LLMs)
  • Color palette generation (algorithmic + AI refinement)
  • Retrieval-Augmented Generation (RAG)

Natural Language Input

Users describe their brand vision in plain language.

5. Data Management

Storage

Store:

  • Brand metadata
  • Asset files
  • Version history
  • User feedback

Data Export (Stretch)

Allow export of brand project data for external analysis.

6. Analytics and Insights (Stretch)

Brand Performance Dashboard

Track:

  • Which generated assets users prefer
  • Which styles perform best across industries
  • Engagement metrics (views, downloads)

Industry Benchmarking

Compare generated brand styles to common patterns in similar markets.

Technologies and Other Constraints

Frontend: React is suggested as a front-end framework.

Backend: Python is suggested for any inference or semantic RAG back-end. 

Cloud Providers: If cloud providers are used, AWS is preferred.

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

The Undergraduate Curriculum Committee (UGCC) reviews courses (both new and modified), curriculum, and curricular policy for the Department of Computer Science.

Background and Problem Statement

North Carolina State University policies require specific content for course syllabi to help ensure consistent, clear communication of course information to students. However, creating or revising a course syllabus to meet updated university policies can be tedious, and instructors often miss small updates to mandatory text the university requires in a course syllabus. Updating a course’s schedule each semester is similarly tedious. In addition, the UGCC must review and approve course syllabi as part of the process for course actions and for reviewing newly proposed special topics courses. Providing feedback or resources to guide syllabus updates can be time-consuming and repetitive, especially when multiple syllabi require the same feedback and updates to meet university policies.

Project Description

The UGCC would like a web application to facilitate the creation, revision, and feedback process for course syllabi for computer science courses at NCSU. An existing web application enables access to syllabi for users in different roles, including UGCC members, the UGCC Chair, and course instructors (where UGCC members can also be instructors of courses). The UGCC members are able to add/update/reorder/remove required sections for a course syllabus, based on the university checklist for undergraduate course syllabi. Instructors are able to use the application to create a new course syllabus, or revise/create a new version of an existing course syllabus each semester. The tool provides functionality for review comments and resolution, important dates (e.g., holidays and wellness days), and the inclusion of schedule information.

We are building on an existing system. The focus this semester will be template item editing and propagation, syllabus duplication with schedule updates, and front-end refactoring with automated testing.

New features include:

  • Propagate template edits to syllabi based on the current template version. The propagation should be automatic for any non-editable blocks. There will need to be a process for conflict management that could be handled programmatically or with user intervention.
  • Full implementation of syllabus duplication that will copy a syllabus for the next semester. This should include updating the course’s schedule to the new semester’s dates and avoiding days when classes are not held (as sketched below). A robust solution (as a stretch goal) would handle a class that is moving days (e.g., M/W to T/H or 2 days per week to 1 day per week).
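
A minimal sketch of the schedule-update step, in TypeScript: generate the new semester's meeting dates for a fixed weekday pattern while skipping no-class days, then assign topics in order. The dates, holiday list, and topic list are placeholders.

const MEETING_DAYS = new Set([1, 3]); // Monday/Wednesday pattern (0 = Sunday)
const HOLIDAYS = new Set(["2026-01-19", "2026-03-09"]); // placeholder no-class days

function meetingDates(start: string, end: string): string[] {
  const dates: string[] = [];
  const last = new Date(end);
  for (let d = new Date(start); d <= last; d.setUTCDate(d.getUTCDate() + 1)) {
    const iso = d.toISOString().slice(0, 10);
    if (MEETING_DAYS.has(d.getUTCDay()) && !HOLIDAYS.has(iso)) dates.push(iso);
  }
  return dates;
}

// Each topic from the old syllabus is assigned to the next available meeting.
const topics = ["Intro", "Version control", "Continuous integration", "Testing"];
const schedule = meetingDates("2026-01-05", "2026-01-31")
  .map((date, i) => ({ date, topic: topics[i] ?? "TBD" }));
console.log(schedule);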

Process improvements include:

  • Refactoring front-end and adding automated front-end testing
  • GitHub Actions that will run tests and report statement coverage
  • Scripts to support VM deployment

Technologies and Other Constraints

  • Technologies are based on extending the existing codebase, which uses:
    • Docker
    • Java (for the backend)
    • PostgreSQL
    • JavaScript (for the frontend) 
  • The software must be accessible and usable from a web browser

Sponsor Background 

Blue Cross and Blue Shield of North Carolina (Blue Cross NC) is the largest health insurer in the state, serving ~5 million members with ~5,000 employees across Durham and Winston-Salem. Since 1933, Blue Cross NC has focused on making healthcare better, simpler, and more affordable, and on tackling the state’s most pressing health challenges to drive better outcomes for North Carolinians.

Background and Problem Statement 

Employer HR/Benefits administrators need a straightforward way to perform member maintenance for group insurance: adding subscribers and dependents, terminating members, updating demographics and contact information, and handling effective dating (including retroactive changes). Current processes are often fragmented across multiple tools and require manual interpretation of complex policy/business rules.

Project Description 

Students will build an end-to-end system—front to back—that demonstrates a clean user experience on the front end with server-side rendering and Micro UIs. They will include a lightweight AI policy assistant that provides real-time, non-binding guidance during HR workflows. Users can chat to describe their task, and the assistant surfaces relevant policy impacts and prompts for confirmation on downstream effects.
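
To illustrate the effective-dating requirement (in TypeScript rather than the project's Java backend, purely for brevity), member changes can be stored as dated events, and the member's state "as of" any date can be derived by replaying them; a retroactive change is simply an event whose effective date lies in the past. All field names are assumptions.

interface MemberEvent {
  effectiveDate: string; // ISO date on which the change takes effect
  change: Partial<{ status: string; zip: string }>;
}

const events: MemberEvent[] = [
  { effectiveDate: "2026-01-01", change: { status: "active", zip: "27701" } },
  { effectiveDate: "2026-03-01", change: { status: "terminated" } },
  { effectiveDate: "2026-02-01", change: { zip: "27513" } }, // retroactive update
];

// Replay all events effective on or before the given date, oldest first.
function stateAsOf(date: string) {
  let state: Partial<{ status: string; zip: string }> = {};
  const applicable = events
    .filter((e) => e.effectiveDate <= date)
    .sort((a, b) => a.effectiveDate.localeCompare(b.effectiveDate));
  for (const e of applicable) state = { ...state, ...e.change };
  return state;
}

console.log(stateAsOf("2026-02-15")); // { status: "active", zip: "27513" }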

Technologies and Other Constraints 

The project uses Vue with Nuxt 3 (SSR) for the frontend, adopting a micro-frontend architecture via Vite Module Federation. The backend runs on Java with Quarkus, exposes GraphQL APIs, and supports any relational database (PostgreSQL recommended). Constraints include synthetic/mock data only, deployment preferably on OpenShift/Kubernetes, and stretch goals for observability (OpenTelemetry + Tempo), TLS encryption, and SSO with Ping OAuth.

Sponsor Background

The Campus Writing & Speaking Program (CWSP), within the Office for Faculty Excellence (OFE), supports faculty in embedding oral, written, and digital communication across the curriculum. Co-directed by Dr. Kirsti Cole and Dr. Roy Schwartzman, with Senior Strategic Advisor Dr. Chris Anson, CWSP leads the Writing & Speaking Enriched Curriculum (WSEC) and ACI initiatives. Its work has helped position NC State as a top US university for writing-in-the-disciplines.

Background and Problem Statement

Phase I delivers the CWSP ACI Certificate Companion, a unified system for faculty to track progress, store artifacts, and submit capstones.

Phase II expands this foundation by creating a Student Partner Program that connects undergraduates with ACI faculty projects, while also layering in advanced innovation features.

Key possibilities include:

  • Student Engagement: Students join faculty certificate projects, contribute artifacts, and log reflections for recognition (badges, micro-credentials).
  • Enhanced Feedback: Faculty, mentors, and students interact through threaded feedback, rubrics, and collaborative comments.
  • Knowledge Commons: The resource repository expands into a faculty-student knowledge base of assignments, prompts, rubrics, and sample projects.
  • Digital Communication Innovation: The platform pilots a home-grown bot to guide faculty and students in using LLMs effectively and ethically. This bot could operate in multiple modes:
    • “Digital Hand” – a step-by-step scaffolder for drafting and revision.
    • Game-based tutorial mode – scenario-driven experiments with AI outputs.
    • Reflective companion – prompting users to interrogate ethics, accessibility, and rhetorical appropriateness.

Phase II is visionary but realistic: it leverages the Phase I system, introduces a student role, and layers in future-facing features that distinguish NC State as a leader in communication innovation.

Project Description

Vision. Extend the platform into a multi-role ecosystem where faculty and students collaborate, supported by integrated resources and innovative digital tools.

MVP Features

  • Student Partner role and opportunity board.
    Faculty can post structured opportunities (e.g., syllabus pilot support, oral workshop assistance, digital portfolio testing). Students browse, sign up, and get matched to projects.
  • Matching and approval workflow.
    Mentors/admins oversee a lightweight matching process to ensure projects are appropriate and workloads are balanced. Students see a clear status update (applied, approved, active).
  • Student reflection and artifact logging.
    Students log contributions and upload artifacts alongside guided reflection prompts (mirroring Phase I’s faculty reflection prompts), reinforcing the habit of metacognition in communication practice.
  • Faculty dashboards with student integration.
    Dashboards show both faculty progress and student contributions, enabling faculty to see how their partners are engaging with their project.
  • Resource repository expansion.
    The repository becomes a shared knowledge commons, where both faculty and students can upload, tag, and reuse assignments, rubrics, and digital tools. Resources are filterable by role and certificate track.

Stretch Goals

  • Student achievement badges and certificates.
    Undergraduate partners earn digital micro-credentials (badges or certificates) for completing engagements, aligning their contributions with NC State’s broader recognition of experiential learning.
  • Peer-to-peer feedback with threaded discussions.
    Students and faculty can comment on each other’s artifacts in discussion-style threads, introducing collaborative critique that mirrors writing center and studio practices.
  • Integrated showcase gallery.
    Builds on the Phase I micro-showcase by combining faculty capstones and student projects into a browsable gallery, with search and tagging by certificate track, communication mode, or skill.
  • Interactive digital communication bot (pilot).
    A home-grown bot scaffolds ethical and effective LLM use in digital communication. Possible modes include:
    • Digital Hand: guided, step-by-step support for drafting and revising.
    • Game-based Tutorial: scenario-driven activities where users test prompting strategies.
    • Reflective Companion: prompts that nudge users to evaluate tone, ethics, accessibility, and rhetorical choices.
  • Analytics dashboards.
    Visual dashboards summarize program outcomes: faculty completion rates, student engagement hours, resource usage, and patterns of feedback. These dashboards help CWSP assess impact and demonstrate institutional value.

Fit for Senior Design. The project requires scalability, permissions logic, and user-experience innovation, and it provides opportunities for backend, frontend, AI integration, and UX design work.

Timeline and scope for the Spring 2026 semester to be determined with sponsors.

Technologies and Constraints

  • React/Next.js; Node/Express or FastAPI; PostgreSQL.
  • Must ensure backward compatibility with Phase I features.
  • AI/bot prototype can be sandboxed using open APIs, but should remain lightweight, ethical, and accessible.
  • Scalable design for future expansion (graduate mentors, cross-college programs).

Sponsor Background

Jenn Woodhull-Smith is a lecturer in the Poole College of Management and has developed open source textbooks for several entrepreneurship courses on campus. The creation of open source micro-simulations based on the course content will not only enhance student learning in and outside of the classroom, but also provide a cost effective learning tool for faculty and students.

Background and Problem Statement

Currently, there is a significant lack of freely accessible simulations that effectively boost student engagement and enrich learning outcomes within educational settings. Many existing simulations are typically bundled with expensive textbooks or necessitate additional purchases. An absence of interactive simulations in an Entrepreneurship course diminishes student engagement, limits practical skill development, and provides a more passive learning experience focused on theory rather than real-world application. This can reduce motivation and readiness for entrepreneurial challenges post-graduation.

Our primary goal is to develop an open source simulation platform that initially supports the MIE 310 Introduction to Entrepreneurship course, but could be later made accessible to all faculty members at NC State and eventually across diverse educational institutions.

Project Description

The envisioned software is a versatile open source tool designed to create visual novel-like mini-simulations with content and questions related to a given course objective. The intent is to empower educators to create their own simulations on a variety of topics. Faculty will be able to develop interactive learning modules tailored to their teaching needs. The tool needs to be able to export grades, data, and other relevant information, per the following requirements:

  • Introduce the scenario in a welcome screen, including text and images
  • Have an end-of-simulation screen that displays the user’s grade
  • Allow for multiple simulations to be run back to back
  • Export grades and user data to a database to be used by Moodle to grade the student
  • Update system design
  • Create a website for students to access the simulation by logging in with their NCSU credentials

Technologies and Other Constraints

Suggestions for the Spring 2026 team are the following:

  • Performance and Technical Debt:
    • Sequential data fetches are blocking; allow them to run concurrently (see the sketch after this list)
    • Project structure should be refactored to clear out legacy and unused code
  • Prepare for a production environment:
    • Ensure all error handling is up to par
    • Set up monitoring
    • Test for accessibility 
  • Completely Open-Source the Project for other Universities to use
  • Connect LTI with Canvas
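
A minimal TypeScript sketch of the concurrency suggestion above, with placeholder endpoint names: replacing sequential awaits with Promise.all lets independent fetches overlap instead of blocking one another.

async function loadDashboard() {
  // Before (blocking): each request waits for the previous one to finish.
  //   const sims = await fetch("/api/simulations").then((r) => r.json());
  //   const grades = await fetch("/api/grades").then((r) => r.json());

  // After (concurrent): both requests are in flight at the same time.
  const [sims, grades] = await Promise.all([
    fetch("/api/simulations").then((r) => r.json()),
    fetch("/api/grades").then((r) => r.json()),
  ]);
  return { sims, grades };
}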

Technologies used in prior semesters include:

  • Frontend: React, Tailwind CSS, Docker, TypeScript
  • Backend: NextAuth.js, TypeScript, Docker, Google Cloud, Moodle, Google Classroom, MongoDB

Sponsor Background

Decidio uses Cognitive Science and AI to help people make better decisions and to feel better about the decisions they make. We plan to strategically launch Decidio into a small network fanbase, then grow it deliberately, methodically, and through analytics into a strong and accelerating network.

Background and Problem Statement

Consumer platforms live or die by their ability to solve the Cold Start Problem. We require tooling to simulate, interrogate, and forecast network formation under different strategic assumptions so that (1) execution aligns tightly with model predictions, (2) investor communication is concrete and falsifiable, and (3) once launched, live telemetry can be visualized against projections for adaptive steering.

Project Description  

The solution includes the implementation of a Domain Specific Language embedded in a Common Lisp REPL and a visualization engine provided through a webapp. The DSL will be provided and is designed to closely mirror the thinking process of membership network creation experts and strategists. Lisp will be the supporting language with its highly interactive Read-Eval-Print-Loop and IDE ecosystem. All of Lisp's capabilities will be directly exposed to the user so they can create anything from simple imperative simulations (scripts) to applicative algorithms to functional schemes, etc. The webapp will be responsible for presenting the visualizations in extremely sophisticated and polished ways. (This is not a behind-the-scenes utility.)

Technologies and Other Constraints

Steel Bank Common Lisp (SBCL) will be used exclusively, along with items like QuickLisp for package management and preferably Emacs + Slime for the IDE. Technologies under consideration for bridging the REPL to server and/or webapp include drakma, hunchensocket, woo + woo-websock, clws, and websocket-driver. Strong preferences for the bridge will be taken into consideration; otherwise, Decidio will provide initial guidance. The visualization engine should run purely on the server and/or webapp client. The stack will be standard full-stack TypeScript/JavaScript, CSS, HTML and Node.js. The visualization itself will be a Temporal Force-Directed Graph (similar to https://observablehq.com/@d3/temporal-force-directed-graph) with full playback capability. For this we recommend using D3.js.
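
For orientation, the data shape such a temporal graph typically consumes (cf. the linked Observable example) gives nodes and links lifetime intervals, and playback filters the graph to a time t. The TypeScript sketch below uses synthetic data and assumed field names.

interface TemporalNode { id: string; start: number; end: number }
interface TemporalLink { source: string; target: string; start: number; end: number }

const nodes: TemporalNode[] = [
  { id: "founder", start: 0, end: Infinity },
  { id: "early-adopter-1", start: 3, end: Infinity },
];
const links: TemporalLink[] = [
  { source: "founder", target: "early-adopter-1", start: 3, end: Infinity },
];

// At playback time t, render only the elements alive at t.
const graphAt = (t: number) => ({
  nodes: nodes.filter((n) => n.start <= t && t < n.end),
  links: links.filter((l) => l.start <= t && t < l.end),
});

console.log(graphAt(5)); // both nodes and the link are visible at t = 5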

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

Hitachi Energy develops advanced transformer design optimization software, including AIDA, to support engineering teams in creating efficient and reliable solutions. Comprehensive documentation is critical for maintainability, onboarding, and accelerating development cycles.

Background and Problem Statement

The AIDA codebase, written in C#, currently lacks adequate documentation, making it challenging for developers to understand class structures, methods, and parameters. Manual documentation is time-consuming and prone to inconsistencies. There is a need for an automated, AI-driven approach to generate accurate and searchable documentation for the entire codebase.

Project Description

The goal of this project is to automate documentation generation for the AIDA C# codebase using AI and modern tooling:

  • Parse C# files with OpenAI to generate XML doc comments and inline explanations for missing documentation.
  • Leverage Roslyn to extract structured metadata: 
    • Class summaries
    • Method descriptions
    • Parameter and return details
  • Aggregate extracted data into JSON or Markdown format for portability.
  • Use OpenAI and DocFX to transform this data into searchable, user-friendly documentation for the entire codebase.

This solution will ensure maintainability, improve developer productivity, and provide a scalable approach for future projects.
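
As a sketch of the aggregation step only (in TypeScript; the extraction itself would run against C# via Roslyn), extracted metadata can be flattened into portable Markdown for publishing. The metadata shape and the class shown are invented for illustration.

interface MethodDoc { name: string; summary: string; params: Record<string, string> }
interface ClassDoc { name: string; summary: string; methods: MethodDoc[] }

function toMarkdown(cls: ClassDoc): string {
  const lines = [`# ${cls.name}`, "", cls.summary, ""];
  for (const m of cls.methods) {
    lines.push(`## ${m.name}`, "", m.summary, "");
    for (const [param, desc] of Object.entries(m.params)) {
      lines.push(`- ${param}: ${desc}`);
    }
    lines.push("");
  }
  return lines.join("\n");
}

console.log(toMarkdown({
  name: "DesignCandidate", // hypothetical class name
  summary: "Represents one transformer design candidate.",
  methods: [{
    name: "Score",
    summary: "Computes the candidate's fitness.",
    params: { weights: "Relative importance of each criterion." },
  }],
}));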

[Figure: high-level view of the proposed solution]

Expected Outcomes and Benefits

  • Comprehensive Documentation: Automatically generate XML doc comments and inline explanations for all classes and methods. 
  • Improved Developer Efficiency: Reduce onboarding time and accelerate feature development by providing clear, searchable documentation. 
  • Traceability and Consistency: Ensure uniform documentation standards across the codebase. 
  • Scalable Solution: Create a repeatable process for other C# projects within Hitachi Energy. 
  • Student Learning Benefits: 
    • Hands-on experience with AI-assisted code parsing and documentation generation.
    • Exposure to Roslyn for code analysis and DocFX for documentation publishing.
    • Practical understanding of integrating OpenAI APIs into enterprise workflows.
    • Opportunity to work on a high-impact project improving software maintainability.

Technologies and Other Constraints

Develop the solution as a SaaS tool hosted on Microsoft Azure, leveraging OpenAI APIs, Roslyn, and DocFX within the appropriate technology stack. While designed for automated documentation generation with minimal human intervention, incorporate a human-in-the-loop approach to allow developers to review, provide feedback, and override AI-generated comments and summaries.

Students will be required to sign over IP to sponsors when the team is formed.

Sponsor Background

Hitachi Energy serves customers in the utility, industry, and infrastructure sectors with innovative solutions and services across the value chain. Together with customers and partners, we pioneer technologies and enable the digital transformation required to accelerate the energy transition toward a carbon-neutral future.

Background and Problem Statement

Hitachi Energy specializes in traction transformers designed for transportation applications. These transformers deliver high uptime, safety, and reduced energy costs through superior efficiency and lightweight construction. They are engineered for resilience in harsh environments and unstable grids, ensuring low maintenance and reliable performance.

Currently, tracking the status of projects within the Traction Transformer portfolio requires manual collation of data from multiple sources (project management tools, emails, spreadsheets). This process is maintained in Excel, which is not scalable and consumes significant human resources.

Project Description

The objective of this project is to develop an automated AI-driven pipeline that:

  • Aggregates project status data from diverse sources (tools, emails, spreadsheets).
  • Parses and infers insights from the collected data for actionable intelligence.
  • Generates dynamic Power BI dashboards for real-time project tracking.
  • Implements a custom notification engine to deliver tailored updates to relevant stakeholders.

This solution will streamline project monitoring, reduce manual effort, and enhance decision-making through timely and accurate information.
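
One hedged sketch of the notification engine (TypeScript; all names and severity levels are placeholder assumptions): stakeholders subscribe to projects and a minimum severity, and each parsed status update is matched against those subscriptions.

type Severity = "on-track" | "at-risk" | "blocked";
const order: Severity[] = ["on-track", "at-risk", "blocked"];

interface Subscription { stakeholder: string; projects: string[]; minSeverity: Severity }
interface StatusUpdate { project: string; severity: Severity; note: string }

const subs: Subscription[] = [
  { stakeholder: "pm@example.com", projects: ["TT-104"], minSeverity: "at-risk" },
];

function recipients(update: StatusUpdate): string[] {
  return subs
    .filter((s) => s.projects.includes(update.project))
    .filter((s) => order.indexOf(update.severity) >= order.indexOf(s.minSeverity))
    .map((s) => s.stakeholder);
}

console.log(recipients({ project: "TT-104", severity: "blocked", note: "Parts delayed" }));
// -> [ "pm@example.com" ]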

[Figure: high-level view of the proposed solution]

Expected Outcomes and Benefits

  • Improved Efficiency: Reduce manual effort and time spent on project tracking by automating data aggregation and reporting.
  • Real-Time Visibility: Enable stakeholders to access up-to-date project status through interactive dashboards.
  • Proactive Communication: Deliver timely, customized notifications to relevant teams, improving collaboration and responsiveness.
  • Scalability: Create a solution that can handle multiple projects and diverse data sources without additional overhead.
  • Student Learning Benefits: 
    • Hands-on experience with AI-driven automation and data engineering.
    • Exposure to Power BI for visualization and reporting.
    • Practical understanding of integrating cloud services and notification systems.
    • Opportunity to work on a real-world problem with measurable business impact.

Technologies and Other Constraints

Develop the solution as a SaaS tool hosted on MS Azure, leveraging the appropriate technology stack. While designed for minimal human intervention, incorporate a human-in-the-loop approach to allow feedback and override AI decisions.

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

Progress Software is a global enterprise software company that builds tools and platforms used by developers and organizations worldwide to create, deploy, and operate modern applications. As part of our ongoing focus on innovation and AI-driven transformation, this project is sponsored by Progress’s centralized Software Development Lifecycle (SDLC) organization, which is responsible for evolving how 1,000+ engineers across the company build software—exploring how AI, automation, and agent-based systems can meaningfully improve developer productivity, software quality, and delivery speed.

Background and Problem Statement

Progress engineering teams work across multiple CI/CD tools—Jenkins, GitHub Actions, Harness, Azure DevOps, and Buildkite—each with different conventions, security controls, and maturity levels. Engineers are often asked to perform tasks in a tool they don’t know (X) even though they are proficient in another (Y). While our SDLC strategy emphasizes Pipelines-as-Code, reusable components, and embedded governance, there is no capability that guides engineers step-by-step to implement changes “the Progress way” in unfamiliar tools.

This gap leads to slower onboarding, uneven compliance, duplicated effort, and inconsistent application of shared modules/templates. The problem is not a lack of standards or inner source assets—it’s making those assets immediately actionable, with guided translation from the engineer’s starting point to the target tool and pipeline pattern.

Project Description

Design and prototype an AI assistant that helps engineers complete real CI/CD tasks in any of our pipeline tools by translating from what they already know (Y) to what they need to do (X) while enforcing Progress best practices.

What the assistant will do:

  • Guided Translation (Y→X): Map familiar pipeline concepts (e.g., Azure DevOps YAML stages) to the target tool (e.g., Harness pipelines), providing side-by-side examples and recommended Progress-approved modules.
  • Practice-Aware Recommendations: Surface the right reusable template, shared step, security scan, or policy gate at the moment of need; explain why each control exists.
  • Task Walkthroughs: Generate a checklist and code snippets (YAML, JSON, shell) to complete tasks like “create a common Harness module” or “migrate build/test steps to a reusable GitHub Action,” with inline rationale tied to our standards.
  • Compliance by Design: Embed Progress governance (license checks, vulnerability scanning, quality gates) into the suggested workflow so implementations align with policy from the start.
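
To make the Y→X idea concrete, a curated concept map can sit alongside retrieval; the TypeScript sketch below holds a few simplified, assumed mappings (not an official Progress vocabulary) that the assistant could pair with retrieved examples and templates.

const conceptMap: Record<string, Record<string, string>> = {
  "azure-devops->github-actions": {
    "stage": "job",
    "task": "step (often a reusable action)",
    "variable group": "repository or organization variables/secrets",
    "pipeline yaml": "workflow file under .github/workflows/",
  },
};

function translate(from: string, to: string, concept: string): string {
  const mapped = conceptMap[`${from}->${to}`]?.[concept.toLowerCase()];
  return mapped
    ? `In ${to}, a ${from} "${concept}" maps to: ${mapped}.`
    : "No curated mapping yet; fall back to retrieval over indexed standards.";
}

console.log(translate("azure-devops", "github-actions", "stage"));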

Example Use Case:

“I know Azure DevOps in MOVEit. Help me write and roll out a new common module in Harness the Progress way—explain each step in Azure DevOps terms so I learn while doing.”

Outcomes for end users: Faster, compliant delivery; reduced support burden on experts; increased reuse of golden modules; improved SDLC scorecard metrics (lead time, deployment frequency, change failure rate).

Technologies and Other Constraints

Preferred technologies (flexible based on availability):

  • LLM & Orchestration: Azure OpenAI (or an open-source LLM), LangChain/LlamaIndex for tool-aware workflows.
  • Knowledge & Retrieval (RAG): Indexed Progress materials (standards, templates, inner source repos) from SharePoint/Confluence/GitHub; vector store for semantic search.
  • CI/CD Integrations
    • GitHub Actions (REST/GraphQL APIs, reusable workflows)
    • Harness (Pipelines API, modules)
    • Azure DevOps (Pipelines API)
    • Jenkins (Pipeline libraries)
    • Buildkite (APIs)
  • Governance & Quality: OPA/Rego for policy gates; Black Duck and SonarQube integration points.
  • Implementation Stack: TypeScript or Python; containerized service; minimal web UI (React or simple Flask/FastAPI) for prompts, walkthroughs, and snippet output.
  • Auth & Access: Entra ID/Azure AD for authentication (mocked or sandboxed for student use).

Constraints and guidance:

  • Use publicly shareable or synthetic datasets for standards/examples; do not access confidential Progress material in the student environment.
  • Focus on pipeline tasks (not all SDLC tools) to ensure scope fits for a semester.
  • Deliver a working prototype with at least two tool mappings (e.g., Azure DevOps → Harness; Jenkins → GitHub Actions).
  • Include evaluation metrics (task completion time, governance coverage, reuse of shared modules) in the demo.

Sponsor Background

Siemens Healthineers develops innovations that support better patient outcomes with greater efficiencies, enabling healthcare providers to meet the clinical, operational, and financial challenges of a rapidly changing healthcare landscape. As a global leader in medical imaging, laboratory diagnostics, and healthcare information technology, Siemens Healthineers has deep expertise across the entire patient care continuum, from prevention and early detection to diagnosis and treatment.

Within Siemens Healthineers, the Managed Logistics organization supports service engineers who perform planned and unplanned maintenance on imaging and diagnostic equipment worldwide. The Managed Logistics team plays a critical role in ensuring that the right replacement parts reach the right engineer at the right place and time, directly contributing to reliable patient care.

Background and Problem Statement

Siemens Healthineers’ software developers and data analysts rely on a large and growing body of internal documentation, including OneNote notebooks, PDF documents, and code repositories. Over time, this information has become difficult to search and navigate, leading to siloed knowledge, inconsistent understanding across the team, and challenges onboarding new team members efficiently.

Currently, finding relevant information often requires manual searching, asking colleagues for guidance, or relying on institutional memory. These approaches can be time-consuming and prone to misunderstandings. As teams and documentation continue to grow, there is a clear need for a more effective way to surface relevant internal knowledge and ensure that team members share a common, accurate understanding of processes, systems, and best practices.

Project Description

The goal of this project is to design and build an Internal Knowledge Companion: a web-based, AI-powered assistant that helps internal users quickly find, understand, and reference information contained within Siemens Healthineers’ internal documentation.

The envisioned solution will leverage retrieval-augmented generation (RAG) techniques, combining an open-weight large language model with a document retrieval system. Rather than training a language model from scratch, the system will retrieve relevant content from internal documents at query time and use the language model to generate clear, context-aware responses grounded in those sources.
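
As a concrete illustration of this retrieve-then-generate flow, here is a minimal sketch assuming the Hugging Face transformers package. The retrieved chunks, source names, and model choice are invented for illustration; a real deployment would plug in the project's vector store and chosen open-weight model.

  # Minimal sketch of the RAG answer step: ask an open-weight model to answer
  # using only retrieved chunks, citing each chunk's source document.
  from transformers import pipeline

  generator = pipeline("text-generation",
                       model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

  retrieved = [  # normally produced by a vector-store similarity search
      ("escalation-runbook.pdf", "Orders unresolved after 48 hours are escalated ..."),
      ("onenote/ops-notes", "The escalation owner is the on-call coordinator ..."),
  ]

  context = "\n\n".join(f"[{src}]\n{text}" for src, text in retrieved)
  prompt = ("Answer using only the context below; cite sources in brackets.\n\n"
            f"Context:\n{context}\n\n"
            "Question: How does our order escalation process work?\nAnswer: ")

  print(generator(prompt, max_new_tokens=120)[0]["generated_text"])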

Example use cases include:

  • A new team member asking, “How does our order escalation process work?” and receiving a concise answer with links or citations to the original documentation.
  • A developer querying, “Where is the data model for lead-time forecasting defined?” and being pointed to relevant design documents and repositories.
  • Analysts asking domain-specific questions and receiving answers supported by source attribution to reduce ambiguity and build trust.

The system will be designed to adapt over time, allowing new documents to be added without retraining the language model. By improving information accessibility and consistency, the Internal Knowledge Companion will enhance onboarding, reduce time spent searching for information, and help teams stay aligned.

Technologies and Other Constraints

Requirements:

  • Retrieval-Augmented Generation (RAG) architecture
  • Web-based user interface
Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background 

Alen Baker is a 1973 graduate of the NCSU CSC department and a 2017 Hall of Fame inductee. He sponsored multiple Senior Design teams over his 20 years employed by Duke Energy. After retiring, he established the Fly Fishing Museum of the Southern Appalachians in Bryson City. He is currently developing a non-profit fly fishing center to support non-profit organizations with education and activities that utilize fly fishing as a means of recovery and enrichment, including programs for veterans with PTSD, cancer patients, foster youth, and scouting merit badges.

Background and Problem Statement 

The Cap Wiese Fly Fishing Center resides within the historic Patterson School Campus. The private middle-to-high school closed in 2009 and reopened as a community education center for various education opportunities such as STEM camps and conservation education. In 2024, Alen Baker proposed that the campus be open to fly fishing organizations that “give back” via fly fishing instruction and support to those who have an interest in fly fishing as a recovery mechanism, as well as conservation. Fly fishing activities are spread among multiple classrooms as well as the campus bodies of water. The campus is 1400 acres in total. This makes it difficult to know and record who visits the campus and where they are located. Obtaining a liability waiver and the identity of anglers utilizing the facilities is a challenge, especially with the limited office hours. Automation would allow a check-in process to be available 24/7. Digital records would allow for usage analysis.

Project Description 

This project will replace the existing manual campus access procedures with a fully automated, easy-to-use, QR-activated cellular phone application. Essential functions include creating and maintaining an identity record for each individual who has physical contact with the campus, as well as obtaining signed waivers for both adults and minors. Upon access, a notification will be issued to the administrative office and a record of the event will be posted for future viewing and analysis.

Desired data for retention and functional use may include identity records of individuals including selected specifics that profile for fly fishing interests, signed waivers for liability release, event details, organizational partnerships and linked inventory records related to applicable assets.

Technologies and Other Constraints 

In addition to Google technologies, the student team will employ software to create and deploy a mobile app, including software to read QR codes.
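
As one hedged illustration of the check-in entry point, the sketch below generates a QR code that sends a visitor's phone to a location-specific check-in form. It assumes the Python qrcode package; the URL and site name are hypothetical.

  # Minimal sketch: generate a QR code that opens the check-in form for one
  # campus location. Assumes the qrcode package; the URL is hypothetical.
  import qrcode

  checkin_url = "https://example.org/checkin?site=boathouse-pond"  # hypothetical
  img = qrcode.make(checkin_url)       # returns a PIL image
  img.save("boathouse-pond-qr.png")    # print and post at the location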

Sponsor Background

Hitachi Energy is committed to delivering high-quality products and services across the energy value chain. Ensuring product reliability and compliance with stringent quality standards is critical to maintaining customer trust and operational excellence.

Background and Problem Statement

Quality teams currently manage disposition workflows and analyze historical failure data through manual processes and fragmented tools. This approach increases the risk of overlooked test failures, delayed corrective actions, and limited traceability of issues. There is a need for an intelligent, automated system that not only streamlines workflows but also provides transparent reasoning behind AI-driven recommendations.

Project Description

The objective of this project is to develop an AI-enabled Quality Support Platform that:

  • Automates disposition workflows for test failures and quality checks.
  • Provides historical failure insights to guide decision-making and improve reliability.
  • Delivers transparent AI recommendations with drill-down capabilities for root cause analysis.
  • Integrates with existing quality systems for seamless adoption and scalability.

This solution will empower quality teams with actionable insights, reduce manual effort, and enhance overall product quality.
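
As one hedged illustration of transparent recommendations, the sketch below recommends a disposition by citing the most similar historical failures as evidence. It assumes scikit-learn; the failure records are invented examples, not Hitachi data.

  # Minimal sketch: explainable disposition suggestions via nearest neighbors
  # over historical failure descriptions.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.neighbors import NearestNeighbors

  history = [  # illustrative historical failure records
      ("F-101", "bushing overheating during load test", "rework"),
      ("F-102", "oil leak at gasket after thermal cycling", "scrap"),
      ("F-103", "partial discharge above limit in HV test", "use-as-is with waiver"),
  ]

  vec = TfidfVectorizer().fit([d for _, d, _ in history])
  nn = NearestNeighbors(n_neighbors=2).fit(vec.transform([d for _, d, _ in history]))

  def recommend(new_failure: str):
      """Suggest a disposition and cite the most similar past failures."""
      _, idx = nn.kneighbors(vec.transform([new_failure]))
      similar = [history[i] for i in idx[0]]
      return {"recommendation": similar[0][2],
              "evidence": [{"id": fid, "description": d, "disposition": disp}
                           for fid, d, disp in similar]}

  print(recommend("gasket oil seepage observed after heat run"))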

Expected Outcomes and Benefits

  • Reduced Missed Failures: Minimize chances of overlooking test failures through automated detection and alerts. 
  • Workflow Automation: Streamline dispositioning and status tracking to improve efficiency. 
  • Enhanced Traceability: Ensure complete visibility of issues and corrective actions for compliance and audits. 
  • Improved Quality: Leverage historical data and AI-driven insights to proactively address recurring problems. 
  • Learning Benefits: 
    • Hands-on experience with AI for quality assurance and workflow automation.
    • Exposure to explainable AI techniques for transparent recommendations.
    • Practical skills in integrating AI solutions with enterprise systems.
    • Opportunity to contribute to a high-impact project improving operational excellence.

Technologies and Other Constraints

Develop the solution as a SaaS tool hosted on MS Azure, leveraging the appropriate technology stack. While designed for minimal human intervention, the solution should incorporate a human-in-the-loop approach that allows users to give feedback on and override AI decisions.

Students will be required to sign over IP to sponsors when the team is formed.

Sponsor Background

Hitachi Energy is a global leader in transformers and digital solutions. The CoreTec is a Linux-based hardware device that monitors transformer health using data from sensors. It is critical to the transformer ecosystem because it allows operators to track and fix issues before they become larger problems.

Background and Problem Statement

The CoreTec currently uses a basic hardware watchdog to maintain system stability. In general, the purpose of a hardware watchdog is to restart a system whenever a critical issue occurs.

With the current implementation, the hardware watchdog toggles a GPIO (General Purpose Input/Output) flag every second. If the flag is not toggled within a certain time interval, the watchdog assumes there is a critical issue and triggers a hardware restart.

This type of hardware watchdog is effective for detecting system faults, but it does not monitor process-level failures or system resource usage. It would be beneficial for the CoreTec to have a more fully-featured watchdog.

Project Description

The goal of this project is to implement a software watchdog which monitors system resources and processes. The project will improve system resilience and provide more granular control over issue recovery.

Technical Details:

  • The watchdog must monitor CPU usage, memory usage, and incoming ZeroMQ messages for a set of configurable processes. Then, when usage/cycle thresholds are met, the watchdog should execute a configurable EXEC command.
  • The software watchdog should toggle a bit on every cycle to indicate its health. Then, the existing hardware watchdog should check the bit and conduct a power cycle if the bit has not changed over a set time period.
  • The application must be integrated with systemd and run indefinitely as a system service. Systemd is a system and service manager which typically handles system boot, processes, and system resources.
  • All watchdog configuration settings must be stored in a JSON file which can be edited by the developers. A user interface for configuration is not part of this project.

Technologies and Other Constraints

The technologies are required unless otherwise stated.

Development Environment

  • Visual Studio 2022 (Community license)
  • WSL Ubuntu 20.04

Programming Language

  • C++ 17 or C++ 20
  • Avoid C++ 23 or later due to possible deployment incompatibility

Build Tools

  • cmake, version 3.27 or later
  • gcc with the following flags:
    • -Wall -Wextra -Wshadow -Wnon-virtual-dtor -pedantic
    • Compilation should produce no warnings.

Static Analysis Tools

  • Clang-tidy (integrated with cmake)
  • ASan (Address Sanitizer)
    • Integrate the address sanitizer in debug mode
    • gcc -fsanitize=address,undefined
  • SonarCloud (optional)

External Libraries

JSON File

  • All watchdog settings must be stored in a single JSON configuration file. The JSON file should include settings for the watchdog itself and for each of the monitored processes. The project sponsor will share more information on it.
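
Since the sponsor will define the actual schema, the following is only a hypothetical illustration of the kind of settings such a file might hold; every key below is invented for illustration.

  {
    "watchdog": { "cycle_ms": 1000, "gpio_bit": 17 },
    "processes": [
      {
        "name": "sensor-ingest",
        "cpu_percent_max": 80,
        "rss_mb_max": 512,
        "zmq_idle_cycles_max": 30,
        "exec_on_threshold": "/usr/bin/systemctl restart sensor-ingest"
      }
    ]
  }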

Other Tools and Constraints

  • Use legacy libprocps version 3.x API (openproc/readproc) to sample per-PID CPU% and RSS.
  • OS commands/utilities must be called using the Linux C/C++ API. Do not fork a shell or use system(), exec(), or similar calls.
Students will be required to sign over IP to sponsors when the team is formed.

Sponsor Background 

The Laboratory for Analytic Sciences is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information. 

Background and Problem Statement 

Artificial Intelligence models can now perform many complex tasks (e.g. reasoning, comprehension, decision-making, and content generation) which until recent years have only been possible for humans. Like humans though, an AI model generally works best on tasks that it was specifically trained to perform. While general-purpose models (often called foundational models, or pretrained models) can have surprisingly strong performance across a range of applications in their domain, they are typically outperformed within any particular subdomain by a model which was specifically trained for that more narrow subdomain. The most common approach to building these more specialized models is to start with a foundational or pretrained model, and then fine-tune it with a dataset in the more narrow subdomain so that the result is specifically trained, and hyper-focused, on that subdomain. 

For example, consider the speech-to-text (STT) model Whisper from OpenAI. Out of the box, this model is capable of producing very accurate transcriptions over a wide range of speech audio recordings (i.e., those having differing languages, dialects, accents, noise environments, verbiage, etc.). Now suppose that a user is only concerned with transcribing speech audio originating from a single environment and a single speaker, e.g. perhaps a recording of a professor’s lectures throughout a semester. This is a far more narrow subdomain of application. A data scientist could, of course, apply Whisper and move on to other projects. However, if squeezing out the best accuracy possible is deemed worth the effort, then that data scientist could consider fine-tuning a custom version of Whisper for this particular application. 

To fine-tune Whisper, the data scientist would start by considering Whisper to be a pretrained model, i.e. a starting point for the eventual model to be trained. Then the user could gather a relatively small set of labeled data, meaning recordings that are manually transcribed with ground truth transcriptions. In the lecture recording example, this might mean going to class for the first week of the semester, recording the audio, and manually transcribing everything that was spoken. With this labeled dataset in hand, the next step would be to fine-tune Whisper. Fine-tuning an AI model optimally can be a very complex process, perhaps both an art and a science, but general procedures are widely available. The result will be a fine-tuned Whisper variant that, in all likelihood, will produce more accurate speech-to-text results, for future recordings of that professor’s class, than the original Whisper model will. Important to note: this fine-tuned model will presumably perform worse than the original Whisper model on most other applications. 

Working with previous senior design teams, LAS has developed an online tool, TuneTank, to help streamline the process of fine-tuning a Whisper model to a given dataset. It is expected that this will enhance the efficiency and effectiveness of the process and its results. However, the existing fine-tuning interface has only very basic support for evaluating a fine-tuned model or for selecting the model best suited to a particular dataset.

Project Description 

To complement TuneTank, the LAS would like a Senior Design team to develop a Python program to evaluate Whisper fine-tunes. Given one or more Whisper models, the program should benchmark each model on several pre-determined and user-specified datasets using metrics like Levenshtein distance and word error rate. Once the benchmark is completed, the program should recommend the best overall Whisper model and the best Whisper model for special use cases (noisy data, multilingual data, etc.).
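
A minimal sketch of that benchmarking loop, assuming the openai-whisper and jiwer packages; the model list, audio paths, and transcripts are illustrative:

  # Minimal sketch: score several Whisper models (names or fine-tuned
  # checkpoint paths) by word error rate on a labeled dataset.
  import whisper
  from jiwer import wer

  candidates = ["base", "small", "finetuned-lecture.pt"]  # illustrative
  dataset = [("clip1.wav", "ground truth transcript one"),
             ("clip2.wav", "ground truth transcript two")]

  results = {}
  for name in candidates:
      model = whisper.load_model(name)
      hyps = [model.transcribe(path)["text"] for path, _ in dataset]
      refs = [ref for _, ref in dataset]
      results[name] = wer(refs, hyps)  # lower is better

  best = min(results, key=results.get)
  print(f"best model: {best}", results)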

The LAS will provide the team with one or more data set(s) with which to use for development and testing. The LAS will also provide the team with experienced mentors to assist in understanding the various AI aspects of this project, with particular regards to the fine-tuning methodologies to be implemented. However, this is a complex topic so at least half the team should have strong interest in the topic of machine learning/artificial intelligence. 

Technologies and Other Constraints 

The team will have great freedom to explore, investigate, and design the benchmarking system described above. However, the methodology employed should not have any restrictions (e.g. no enterprise licenses required). In general, we will need this technology to operate on commodity hardware and software environments, and only make use of technologies with permissive licenses (MIT, Apache 2.0, etc). Beyond these constraints, technology choices will generally be considered design decisions left to the student team. The LAS will provide the student team with access to AWS resources for development, testing and experimentation, including GPU availability for model training. 

ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students. 

Sponsor Background 

The Laboratory for Analytic Sciences is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information. 

Background and Problem Statement 

Artificial Intelligence models can now perform many complex tasks (e.g. reasoning, comprehension, decision-making, and content generation) which until recent years have only been possible for humans. Like humans though, an AI model generally works best on tasks that it was specifically trained to perform. While general-purpose models (often called foundational models, or pretrained models) can have surprisingly strong performance across a range of applications in their domain, they are typically outperformed within any particular subdomain by a model which was specifically trained for that more narrow subdomain. The most common approach to building these more specialized models is to start with a foundational or pretrained model, and then fine-tune it with a dataset in the more narrow subdomain so that the result is specifically trained, and hyper-focused, on that subdomain. 

For example, consider the speech-to-text (STT) model Whisper from OpenAI. Out of the box, this model is capable of producing very accurate transcriptions over a wide range of speech audio recordings (i.e., those having differing languages, dialects, accents, noise environments, verbiage, etc.). Now suppose that a user is only concerned with transcribing speech audio originating from a single environment and a single speaker, e.g. perhaps a recording of a professor’s lectures throughout a semester. This is a far more narrow subdomain of application. A data scientist could, of course, apply Whisper and move on to other projects. However, if squeezing out the best accuracy possible is deemed worth the effort, then that data scientist could consider fine-tuning a custom version of Whisper for this particular application. 

To fine-tune Whisper, the data scientist would start by considering Whisper to be a pretrained model, i.e. a starting point for the eventual model to be trained. Then the user could gather a relatively small set of labeled data, meaning recordings that are manually transcribed with ground truth transcriptions. In the lecture recording example, this might mean going to class for the first week of the semester, recording the audio, and manually transcribing everything that was spoken. With this labeled dataset in hand, the next step would be to fine-tune Whisper. Fine-tuning an AI model optimally can be a very complex process, perhaps both an art and a science, but general procedures are widely available. The result will be a fine-tuned Whisper variant that, in all likelihood, will produce more accurate speech-to-text results, for future recordings of that professor’s class, than the original Whisper model will. Important to note: this fine-tuned model will presumably perform worse than the original Whisper model on most other applications. 

The Laboratory for Analytic Sciences (LAS) has been fine-tuning AI models for many years, and expects to continue doing so for many more. So, it would be desirable to make this process as efficient, effective, and user-friendly as possible. In general, fine-tuning efforts at the LAS are done on an individualized basis, using a disorganized bevy of Jupyter Notebooks and data formatting scripts. This introduces unwelcome overhead into the actual process of creating useful models quickly.

A Fall 2025 Senior Design team helped to automate and simplify the process of fine-tuning a model. They created TuneTank, a web application that lets users create and manage a queue of fine-tuning jobs. The user can create new jobs with an easy-to-use interface for specifying the most essential fine-tuning parameters, and the application offers a basic interface for starting, suspending, and resuming fine-tuning jobs.

Project Description 

TuneTank focused primarily on ease-of-use through a real-time web UI. We would like to build a second version of TuneTank that supports additional fine-tuning parameters and techniques like LoRA or quantized training. Given some of the architectural limitations of TuneTank (Docker and wav2vec clustering integration) and feedback from the previous team, starting from scratch with a different tech stack is probably the right approach.
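
As a hedged illustration of what supporting LoRA could mean in the new version, the sketch below attaches LoRA adapters to a Whisper checkpoint with the Hugging Face peft library; the hyperparameters are illustrative defaults, not a recommended configuration.

  # Minimal sketch: wrap a pretrained Whisper model with LoRA adapters so
  # only the small adapter matrices are trained during fine-tuning.
  from transformers import WhisperForConditionalGeneration
  from peft import LoraConfig, get_peft_model

  base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

  lora = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"])  # attention projections

  model = get_peft_model(base, lora)
  model.print_trainable_parameters()  # only adapter weights are trainable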

The LAS will provide the team with one or more data set(s) with which to use for development and testing. The LAS will also provide the team with experienced mentors to assist in understanding the various AI aspects of this project, with particular regards to the fine-tuning methodologies to be implemented. However, this is a complex topic so at least half the team should have strong interest in the topic of machine learning/artificial intelligence. 

NOTE: Commercial applications for the purpose described above do already exist in some form on the market. If the team decides to take inspiration (or even portions of actual software) from such applications that is fine with the LAS…so long as the constraints below are not violated, nor of course any legal restrictions. 

Technologies and Other Constraints 

The team will have great freedom to explore, investigate, and design the fine-tuning system described above. However, the methodology employed should not have any restrictions (e.g. no enterprise licenses required). In general, we will need this technology to operate on commodity hardware and software environments, and only make use of technologies with permissive licenses (MIT, Apache 2.0, etc). Beyond these constraints, technology choices will generally be considered design decisions left to the student team. The LAS will provide the student team with access to AWS resources for development, testing and experimentation, including GPU availability for model training. 

ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students.

Sponsor Background

OpenDI's mission is to empower you to make informed choices in a world that is increasingly volatile, uncertain, complex, and ambiguous. OpenDI.org is an integrated ecosystem that creates standards for Decision Intelligence. We curate a source of truth for how Decision Intelligence software systems interact, thereby allowing small and large participants alike to provide parts of an overall solution. By uniting decision makers, architects, asset managers, simulation managers, administrators, engineers, and researchers around a common framework, connecting technology to actions that lead to outcomes, we are paving the way for diverse contributors to solve local and global challenges, and to lower barriers to entry for all Decision Intelligence stakeholders.

OpenDI’s open source initiative is producing the industry standard architecture for Decision Intelligence tool interoperability, as well as a number of example implementations of OpenDI compliant tools and associated assets. The initiative's philosophy is to develop in the open, so all projects are available on github.com. 

Background and Problem Statement

Decision Intelligence is a human-first approach to deploying technology for enhancing decision making. Anchoring the approach is the Causal Decision Model (CDM), comprising actions, outcomes, intermediates, and externals as well as causal links among them. CDMs are modular and extensible, can be visualized, and can be simulated to provide computational support for human decision makers. The OpenDI reference architecture provides a specification of CDM representation in JSON as well as defines an API for exchanging CDMs; however, there is no existing tool that allows curation, provenance, and sharing of these extensible CDMs. This project will provide OpenDI’s Model Hub, similar to Docker Hub for containers or Hugging Face for AI models, to allow public browsing, searching, and sharing of CDMs.

The current state of the OpenDI Model Hub is a partial implementation which lacks the richness and robustness needed to be a place for community contributions of DI models. In particular, tooling is required to support the local creation or editing of models that can be pushed to and pulled from the hub.

Project Description

The best way to think about OpenDI’s Model Hub is by looking at Docker Hub. 

Users should be able to:

  1. Register for accounts using an OAuth2 provider (like Google, Meta, or Apple)
  2. Take models created using other tools (provided by OpenDI) and push those models to the Model Hub
  3. Pull models from the Model Hub
  4. Track the history and provenance of extensions or modifications to models
  5. Visualize the characteristics, history, and provenance of models
  6. Control whether models are publicly available or shared with specific other users
  7. View and search models they have access to 
  8. Use a CLI utility and/or the OpenDI Authoring Tool to interact with the Model Hub

The existing model hub has the basic functionality for requirements 1, 3, 4, 5, and 6 as a proof of concept. Students this semester will emphasize a full account integration (requirement 1), model ownership and sharing (requirement 6), and tool creation integration (requirement 8). 
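
For requirement 8, here is a minimal sketch of what CLI-style push/pull might look like in Python (matching the prior team's CLI language); the endpoint paths, base URL, and token handling are assumptions, not the actual Model Hub API.

  # Minimal sketch: push a local CDM JSON file to the hub, or pull one down.
  # All URLs and the auth scheme are hypothetical placeholders.
  import json
  import requests

  HUB = "https://modelhub.example.org/api/v1"          # hypothetical base URL
  TOKEN = {"Authorization": "Bearer <oauth2-token>"}   # hypothetical auth

  def push(path: str, name: str):
      """Upload a local CDM JSON file under the given model name."""
      with open(path) as f:
          cdm = json.load(f)
      r = requests.post(f"{HUB}/models/{name}", json=cdm, headers=TOKEN)
      r.raise_for_status()

  def pull(name: str, path: str):
      """Download a CDM by name and save it locally."""
      r = requests.get(f"{HUB}/models/{name}", headers=TOKEN)
      r.raise_for_status()
      with open(path, "w") as f:
          json.dump(r.json(), f, indent=2)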

Technologies and Other Constraints

This project will require the team to contribute directly to the OpenDI open-source assets. OpenDI assets are developed publicly on GitHub, and the result (and process) of this project will be hosted there as well; students will be expected to contribute to the public OpenDI repositories on github.com. This means team members will be expected to follow OpenDI community contribution standards and to contribute their work under the license OpenDI selects. Team members are encouraged to use their own GitHub accounts to get credit for their contributions.

The existing Model Hub has a backend written in Go and frontend in React. Students will be extending these implementations, so familiarity with Go and React is encouraged. A prior team began development of a CLI tool in Python, which this semester’s team will extend. 

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

Okan Pala and the NC State Office of Research Commercialization are working together to develop a “proof-of-concept” location-based ad platform. A Spring 2025 Senior Design team initially worked on this and created an application that forms the backbone of an advertising system. We aim to build on that work to add improved functionality and complete the initial version of the system.

Background and Problem Statement

Problem 1: Direct interactivity with mobile digital displays (MDDs) and/or fixed displays does not exist. Static advertisements on vehicle-top and in-vehicle displays are cumbersome, expensive, and not targeted to specific individuals or groups. A big segment of the population is cut off from the market. There is no easy way  for a mom-and-pop store to put advertisements on taxis. They most likely need to go through an advertising agency, and it won’t be targeted (geographically and temporally) in most cases. Similarly, there is no avenue for individual expressions on public digital displays (mobile or otherwise).

Problem 2: There is no personalization, customization or user input into the advertisements/messages  placed on vehicles (especially within the specific proximity of a given location).

Problem 3: Personal expression of messages in public spaces through digital displays is not available, and when available, is severely restricted or not geo-targeted.

Problem 4: Accurate evaluation and reporting of the effectiveness of digital displays, which determines the return on investment for advertisers, is hard to achieve. There is a need to implement solutions to measure and report the effect of mobile digital display advertisements (or personal messages) through computer vision and other approaches.

Project Description

This project involves developing a mobile app and software infrastructure to manage and deliver geo-specific and temporally targeted ads and personal messages to digital vehicle-top and in-vehicle displays. Users will have control over the message or ads’ timing (temporal  control) and location (geo-fenced).
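
As a hedged illustration of the core targeting check (is this display inside a campaign's geofence and time window?), here is a minimal Python sketch; the circular geofence and sample campaign values are assumptions.

  # Minimal sketch: decide whether an MDD at a given location and time is
  # eligible to show a geo-fenced, time-windowed campaign.
  from datetime import datetime, time
  from math import radians, sin, cos, asin, sqrt

  def haversine_miles(lat1, lon1, lat2, lon2):
      """Great-circle distance between two points, in miles."""
      dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
      a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
      return 3956 * 2 * asin(sqrt(a))

  campaign = {"center": (35.78, -78.64), "radius_miles": 3.0,
              "start": time(14, 0), "end": time(17, 0)}  # 2-5pm, 3-mile fence

  def eligible(display_lat, display_lon, now: datetime) -> bool:
      near = haversine_miles(display_lat, display_lon,
                             *campaign["center"]) <= campaign["radius_miles"]
      in_window = campaign["start"] <= now.time() <= campaign["end"]
      return near and in_window

  print(eligible(35.79, -78.66, datetime.now()))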

An initial proof-of-concept app-based system that places ads/personal messages on MDDs was developed in Spring 2025 by a previous senior design team. It included a bidding system to allow users to outbid others for a message (or ad) to be shown at a specific time and place. There should also be an option for the passengers of vehicles or the owners/drivers of the fleet to interact with the MDDs through another display placed inside the vehicle. This could be part of the customer-facing app or the interface that is used by the advertisers.

The revenue sharing business logic is based on real-time revenue from mobile digital displays and the effectiveness of each campaign. Ad or messaging revenues from digital displays are shared with the vehicle (or fleet) owner. The application business logic should be specific to the user type and should be accompanied by a user-specific interface. We have identified five user groups: Advertisers, Providers, Riders, and Ad Targets (people who are the target audience for the advertisement or messaging), plus the System Manager, a representative of the company itself who will interact with and oversee the system.

The “Advertisers” can be Ad agencies, large corporations, local chains, local single businesses (this  includes political campaigns), riders themselves, individual users (non-business) and governmental users  (local, regional, national). The advertising users may or may not need content creation assistance.  An existing version of the app includes an AI image generator to help users create ad campaigns (minus an  incorporated QR code generator). Advertisers also need a spatially enabled dashboard to be able to  track their campaign (included in the previous version) and evaluate the effectiveness of each campaign  (still needed). This dashboard can receive information from the ad evaluation system and individual  displays, mobile apps, accounting system etc. Advertiser account creation (Individual and corporate),  spatially/temporally enabled campaign creation that includes an AI assisted ad campaign creation, as  well as bidding business logic was developed in a previous version. The previous version also includes an  admin panel for advertisers to review their existing campaigns and make changes.  

The “Providers” are grouped as individual vehicle owners (i.e. uber driver, independent taxicab owner,  personal vehicle owner, etc.) and fleet owners (Taxicab companies, businesses with their own service  fleets such as HVAC companies, etc.). In the future this also may include government entities.

The “Riders” are composed of the private citizens or corporate partners that would be our future  affiliates. This group is valid only for the MDDs that are on vehicles for hire. The “Ad targets” are the  people who potentially would see the ad or message and respond to it in some way. One example for this is them clipping a digital coupon through a QR code provided on the ad. Explicit interaction through Ad target action or implicit data created through computer vision (i.e. a near-time eye contact detection with a counter) would be some of the inputs that would make up the ad evaluation business logic. This also might include external data sources such as Google crowd and business customer presence data with other available metrics.

The last user type is the “System Manager.” This user type will have sub-types in the future with varying privileges, but initially this is the user that will oversee the campaigns, intervene when needed, and approve the campaigns with assistance from an AI helper.

We would like the existing API to be expanded in such a manner that, in the future, the system could  work with other entities (e.g., Uber and Lyft) to expand past the Taxi industry and individual drivers.

Examples:

McDonalds 2-5pm 50% off on Coffee products

Local McDonald’s store chain owners agree to a joint ad campaign with a 50% off coupon linked to a QR code displayed on the ad. Three versions of this ad campaign are created: one shown within 3 miles of each McDonald’s restaurant location with a higher bid limit, one shown across the whole Triangle region with a very low bid, and one shown in urban areas within a 1-mile walking distance of the restaurant locations with a high bid. Each MDD displays six ads per minute, depending on their location and the bids’ locations and amounts. How long each ad is displayed varies with the bids. In addition, if a QR code linked from the MDD is used, then the company pays an additional amount.

Ride-hail or taxi cab hired

After entering the ride vehicle (or in advance), the rider would pay the system to display their own  ad/personalized message on the MDD.  The rider would get a slight priority on the bidding system. The driver (or the fleet owner) can choose either to generate revenue by  allowing others to display an ad on their MDDs or choose to display their own ads with heavy priority on  the bidding system.

Big events are large revenue opportunities, since mobile digital display ads or messages will get more impressions. The advantage of this app is that the potential for income increases during special events. For example, special events and crowded areas provide more opportunities for advertisers to reach wider segments. The more people in the area who see the ads, the more revenue is generated, and hence the higher the income for the provider.

Personalization of Vehicle-top and In-vehicle displays

When a vehicle picks up the call, exterior display systems start displaying personalized messages, logos,  etc. immediately, so that the app users can recognize the taxi coming to pick them up. This could also be  used by corporate customers for their own people.

Riders get a choice to display their own message or ad with a bidding advantage. For example, corporate customers may choose to display their own corporate logo, message, or ad.

Technologies and Other Constraints 

We are flexible about technology. The team should research the best available technology for each component and design the system accordingly. As for location-specific analysis, we know that ESRI (a GIS software vendor) has technology for geofencing. They also have a platform for app development, but we are not sure it is the best option for a robust application. We will start with the technology choices the previous team made and change them as necessary.

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

SAS provides technology that is used around the world to transform data into intelligence. A key component of SAS technology is providing access to good, clean, curated data.  The SAS Data Management business unit is responsible for helping users create standard, repeatable methods for integrating, improving, and enriching data.  This project is being sponsored by the SAS Data Management business unit in order to help users better leverage their data assets.

Background and Problem Statement

An increasingly prevalent and accelerating problem for businesses is dealing with the vast amount of information being collected. Combined with weak data governance, this leaves enterprises facing conflicting uses of domain-specific terminology, varying levels of data quality/trustworthiness, and fragmented access. A lack of timely and accurate data reporting may drive poor decisions, operational inefficiencies, and financial losses, in addition to exposing businesses to regulatory penalties, compliance failures, and reputational damage, ultimately putting them at a competitive disadvantage.

To help address the underlying issues related to managing their data, a business may buy, or build, a data governance solution that allows them to holistically identify and govern an enterprise's data assets. SAS has developed a data catalog product which enables customers to inventory the assets within their SAS Viya ecosystem. The product also allows users to discover various assets, explore metadata, and visualize how the assets are used throughout the platform.

In past semesters, SAS student teams have explored different avenues and extensions to a data catalog, such as operationalizing governance and creating accessible components to visualize metadata. In this project, we'd like to focus instead on the discovery and, primarily, exploration phase. A data catalog provides a curated view of the metadata that is stored within the environment. The intention is to provide easy-to-understand dashboards and visualizations, about specific types of metadata, to a broad audience. When displaying cataloged assets to an end user, a view of the metadata for a specific context or user persona is presented. The view will hide some of the metadata and the underlying complexity.

This is where we reach one of the weaknesses of visualizing a data catalog. The catalog provides a metadata system that supports storing any type of metadata object, but displaying that information to end users (outside of just an API) in a useful way is difficult without an understanding of the context (including the business, domain, and user personas).

What the sponsor would like to investigate in this project is a metadata explorer: an interface that gives the user unfiltered access to all the metadata in the environment (that they are authorized to view). If the catalog is akin to a traditional, physical library card system, then the explorer is akin to walking through the book stacks. An explorer opens up possibilities for end users. If we're already providing an interface to view the metadata, then why not also be able to edit it? Or, once a user understands the underlying metadata, they could create a customized dashboard.

Project Description

Ingest Metadata

On startup, the application must ingest/load a pre-defined set of metadata (in a JSON format). An initial set of metadata will be provided by the sponsors as well as a script for the generation of synthetic metadata. All metadata provided will conform to the Open Metadata schema (https://docs.open-metadata.org/v1.4.x/main-concepts/metadata-standard/schemas).

Metadata Explorer

The application must provide an API to perform create, read, update, and delete (CRUD) operations upon metadata in the system (a minimal sketch appears after the functionality list below).

The application must provide an easy-to-use, performant interface for users to explore the metadata available in the system. The students should brainstorm various approaches or UI patterns that could be used to allow users to visualize the metadata.

The application should provide the following functionality:

  • The ability to view and edit all of the attributes associated with a given entity.
  • The ability to see all classifications/relationships associated with a given entity.
  • (stretch goal): Add new attributes to an existing entity.
  • (stretch goal): Create/add new relationship.
  • (stretch goal): Ability to filter based on attributes.
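
As a hedged illustration of the CRUD API mentioned above, here is a minimal sketch using FastAPI and an in-memory store; the route shapes are assumptions, and real entities would conform to the Open Metadata schemas.

  # Minimal sketch: CRUD over metadata entities with FastAPI.
  from fastapi import FastAPI, HTTPException

  app = FastAPI()
  entities: dict[str, dict] = {}  # id -> metadata entity (illustrative store)

  @app.post("/entities/{entity_id}")
  def create(entity_id: str, entity: dict):
      entities[entity_id] = entity
      return entity

  @app.get("/entities/{entity_id}")
  def read(entity_id: str):
      if entity_id not in entities:
          raise HTTPException(status_code=404)
      return entities[entity_id]

  @app.put("/entities/{entity_id}")
  def update(entity_id: str, patch: dict):
      read(entity_id)  # raises 404 if missing
      entities[entity_id].update(patch)
      return entities[entity_id]

  @app.delete("/entities/{entity_id}")
  def delete(entity_id: str):
      read(entity_id)
      del entities[entity_id]
      return {"deleted": entity_id}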

Out of Scope

The following are out of scope for this project:

  • User authentication/authorization

Technologies and Other Constraints

  • Any open-source library/packages may be used.
  • This should be a web application. While the design should be responsive, it does not need to run as a mobile application.
  • React must be used for UI.
  • For any GenAI usage, open-source models must be used.

Sponsor Background

The NC State Computer Science Undergraduate Programs and Advising Team is responsible for the successful recruitment, retention, and graduation of over 1,500 undergraduate students. Our mission is to guide students through the curriculum, policies, and opportunities available within the department, college, and university. A core component of this is providing accurate, timely, and accessible advising resources to all students to help them develop complete and reasonable plans for progress towards degree. 

Background and Problem Statement

The volume and complexity of information regarding undergraduate degree requirements, concentrations, tracks, and departmental policies often make it challenging for students to navigate their academic journey. While this information is available online across various official NC State, College of Engineering, and Computer Science pages, students frequently struggle to synthesize it to create long-term plans or identify relevant co-curricular opportunities. This leads to an overwhelming number of repetitive advising questions and a reliance on human advisors for easily answerable inquiries, reducing the time advisors have for complex, individualized student needs.

The student team will create an advising chatbot that synthesizes curated information from across the university to support students in their academic journey. The chatbot will be trained to understand the expectations behind a BS in Computer Science degree and provide feedback on students’ plans to complete their degree programs.

Important Note: The chatbot is NOT intended to replace the required advising process, but rather to complement the work of human advisors by handling informational queries and providing students with preliminary planning tools. The advisors will remain responsible for reviewing student plans and actively supporting students as they identify concrete action items related to co-curricular activities and plans. The motivation is to streamline access to essential advising information and improve the efficiency of the advising process by focusing human advisor time on high-level guidance.

Project Description

The envisioned solution is a bespoke, web-based advising chatbot that is trained on official NC State Computer Science undergraduate degree and policy documents.

Core Functionality Use Cases:

  1. Long-Term Planning: A student asks, "What courses do I need to take in my next two semesters to complete the AI concentration?" The chatbot responds with the required courses, suggesting an optimal sequence based on prerequisites and citing the official curriculum sheet as the source (see the sequencing sketch after this list). The plan generated by the chatbot serves as a starting point for discussion with an assigned advisor.
  2. Opportunity Exploration: A student asks, "How can I get involved in undergraduate research?" The chatbot provides an overview of available programs (e.g., Undergraduate Research in Computer Science), links to the relevant application pages, and provides details about the process for getting started. The advisor will then help the student identify concrete action items, such as emailing a specific professor or filling out an application.
  3. Departmental Engagement: A student asks, "Are there any student organizations related to cybersecurity?" The chatbot lists relevant student organizations and links them to the department's calendar of events that match a student's stated interests.
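
As a concrete illustration of use case 1's sequencing logic, the sketch below orders courses with a standard-library topological sort; the prerequisite fragment is illustrative, not the official curriculum.

  # Minimal sketch: order courses so prerequisites come before the courses
  # that require them. The graph is an invented fragment for illustration.
  from graphlib import TopologicalSorter

  prereqs = {                      # course -> set of prerequisites
      "CSC 316": {"CSC 216"},
      "CSC 411": {"CSC 316"},
      "CSC 422": {"CSC 316"},
  }

  order = list(TopologicalSorter(prereqs).static_order())
  print(order)  # e.g., ['CSC 216', 'CSC 316', 'CSC 411', 'CSC 422']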

The solution will require:

  • A back-end component to process natural language input, retrieve information from the corpus of official documents, and formulate accurate, cited responses.
  • A web platform to host the user interface.
  • An authentication layer to ensure the system is accessed by authorized users.

End users (students) will benefit by gaining 24/7 access to accurate, up-to-date advising information for self-service planning. It will empower them to arrive at advising sessions with a greater understanding of their options, leading to more productive and personalized time with their advisor. Advisors will benefit by having fewer basic informational queries, allowing them to dedicate more time to complex advising, mentorship, and reviewing the plans students create using the chatbot.

A stretch goal is logging chat interactions so that the sponsor team can use the information to further refine advising materials for students.

Technologies and Other Constraints

The technology stack should be similar to the one used in CSC 326 to minimize new technology exploration and to allow the sponsors to provide support:

  • Java Spring Boot (backend of web application)
  • React JS (frontend)
  • Maven (build tool)
  • JUnit (testing framework)
  • MySQL (database)

There will also be several extensions beyond the CSC 326 web stack:

  • Docker to deploy the application

Additionally, the team will need to explore options for creating a bespoke chatbot and provide a recommended solution to the project sponsors for approval.

The sponsors will provide access to webpages and other resources that should inform creation of the chatbot’s underlying model.

Sponsor Background

Dr. Stevenson leads an Environmental Education (EE) Research Lab within the Parks, Recreation & Tourism Management Department at NC State. One of the lab’s projects partners with the Duke University Marine Lab’s Ready, Set, Resilience project, in which teachers from across the state teach about resilience through nature fables. Dr. Stevenson’s lab runs the research and evaluation arm.

Background and Problem Statement

The Fall 2025 Senior Design team launched development of a web-based platform to support Ready, Set, Resilience by enabling teachers to assign assessments and collect student evaluation data in a centralized system. The platform includes reusable assessment templates, workflows to set up teachers, classrooms, and students, authentication for all user groups, and initial data visualizations. The Fall team also produced a detailed report documenting the current system, technical decisions, and recommended next steps.

In Spring 2026, the focus will shift from initial build-out to real-world testing and refinement. The team will pilot key workflows with the NC State and Duke project staff and participating teachers, then iteratively improve the system based on stakeholder feedback, especially around usability, privacy expectations, and classroom practicality. The goal is to evolve the platform into something reliable and easy to maintain, while keeping it flexible enough to adapt as the Ready, Set, Resilience program and its assessments continue to develop.

Project Description

As with the Fall 2025 Senior Design team, the main purpose of the project is to provide a user-friendly, streamlined way for teachers and students involved in our project to upload evaluation and assessment data, visualize trends, and send that data to the project team for examination across classrooms and schools. A lot of this project will involve iterative design in response to stakeholder needs, which are largely still forming or may change; students interested in problem-solving with clients with real-world needs and collaborating to balance user experience with sustainable and achievable back-end design are a good fit.

The team should start by reviewing the Fall 2025 report for an understanding of where we started and ended up, as well as suggested next steps. The EE Lab can also provide guidance on prioritizing, clarifying, and potentially editing these next steps to fit current needs. Early in the process, we also anticipate meeting with groups of end-users or other stakeholders in the project; this may include school district personnel, who will be interested in data security and privacy; teachers, who will be concerned with functionality; and/or Ready, Set, Resilience program staff, who may have a variety of thoughts and feedback. We anticipate these meetings will present nuanced challenges or priorities.

Spring 2026 Deliverables (initial scope; refined with stakeholder feedback):

  • Pilot readiness and sustainability: Prepare the existing system for real-world use by improving deployment reliability and producing concise admin documentation for setup, upgrades, and basic maintenance.
  • User testing and iterative improvements: Conduct structured usability/acceptance testing with teachers and program staff, synthesize findings into a prioritized backlog, and implement the highest-impact fixes.
  • District-safe student onboarding/authentication: Design and implement a student access approach that does not require school email addresses (as needed), while still supporting privacy and responsible access control.
  • Mobile/tablet-friendly rubric scoring MVP: Create a streamlined rubric-scoring interface optimized for tablets/phones so teachers can quickly assess student work while circulating in the classroom, with results saved in the system for later review/analysis.
  • Data export/reporting improvements: Improve exports (e.g., CSV and/or printable reports) so the project team can efficiently analyze results across classrooms and schools while respecting data privacy expectations.

Over winter break, Dr. Stevenson met with a few teachers who would be end users, and those conversations raised a few ideas and questions that should provide good examples of the types of issues that may arise. For instance, one teacher raised concerns with using school email addresses for students, as the district has restrictions on external systems that link to those addresses; we have an email in with the district to inquire further. In addition, the RSR team has been working on a general assessment rubric we think will apply to lots of the RSR activities students complete; we could imagine focusing on a mobile/tablet-friendly version of this rubric that could integrate with the web-based system to allow teachers to quickly assess student work as they circulate around the room.

Technologies and Other Constraints

Technologies (used by the Fall 2025 Senior Design team):

  • Frontend: Angular web application.
  • Backend: Spring Boot REST API (secured with Spring Security and JWT authentication).
  • Database: PostgreSQL (using row-level security).
  • Deployment: Docker (Docker Compose), deployed on the sponsor’s Linux VM behind an nginx reverse proxy.
  • Authentication: Supports both Google OAuth 2.0 and local username/password login.

Other constraints:

  • Privacy and security are critical, and the system must protect confidentiality through role-based access control and database-level enforcement.
  • The system should remain simple and sustainable to operate and maintain, with low ongoing cost and effort appropriate for a grant-funded education program.

Sponsor Background

The Juntos Program (pronounced “Who-n-toes”), meaning “Together” in Spanish, is dedicated to uniting community partners in order to equip students in grades 8 through 12, along with their families, with the knowledge, skills, and resources needed to ensure high school graduation and broaden post-secondary academic and career opportunities.

Launched in 2007, the program was born out of a survey conducted among Latino students and their families, revealing a critical need for a greater understanding of the educational system. Today, Juntos serves all students and families interested in enrolling, offering a comprehensive program that includes Family Engagement, 4-H clubs, Academic Success Coaching and an annual Summer Academy.

In recent years, Juntos has expanded to include workforce development initiatives, encouraging high school students to explore College and Career Pathways. The program’s success is made possible through the collaborative efforts of Extension’s 4-H and Family & Consumer Sciences (FCS) agents, K-12 school systems, post-secondary institutions, and community volunteers, creating a sustainable and impactful model that continues to thrive in communities across the United States.

Background and Problem Statement

In North Carolina, the Juntos Program has expanded to approximately 21 school sites, each hosting weekly Juntos–4-H club meetings and periodic Family Engagement events. Participant attendance is currently documented using paper sign-in sheets on which students and family members write their names and signatures. Program coordinators at each school site scan these attendance sheets and upload them to the Juntos North Carolina State Leadership Drive. Juntos program assistants at the North Carolina State Office then manually enter participant information into a Microsoft Excel roster. This multi-step, paper-based process involves numerous handoffs before data coding is completed, resulting in inefficiencies, delays, and increased administrative burden for both site coordinators and program assistants.

Project Description

Build an attendance and signature collection tool for the Juntos Program to replace paper sign-in sheets and reduce manual scanning and spreadsheet entry.

  • Support creating and managing events across multiple school sites (e.g., weekly clubs and family engagement events).
  • Provide a digital sign-in workflow that captures participant information and an audit-acceptable signature/attestation.
  • Support roles and permissions for site coordinators and state office/program staff.
  • Provide basic review and correction workflows to improve data quality and reduce downstream manual work.
  • Generate exports compatible with existing reporting needs (e.g., CSV/Excel) and produce audit-friendly records per event (see the sketch after this list).
  • Store and organize attendance records by event and site with simple search/filtering.
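
As a hedged illustration of the export path, the sketch below models one attendance record and writes a CSV using only the Python standard library; the field names are assumptions, not the official roster format.

  # Minimal sketch: an exportable attendance record and a CSV export.
  import csv
  from dataclasses import dataclass, asdict
  from datetime import datetime

  @dataclass
  class Attendance:
      site: str
      event: str
      participant: str
      role: str            # e.g., student or family member
      signed_at: str       # ISO timestamp of the digital attestation

  records = [Attendance("Lee County HS", "Weekly 4-H Club",
                        "Jane Doe", "student", datetime.now().isoformat())]

  with open("attendance_export.csv", "w", newline="") as f:
      writer = csv.DictWriter(f, fieldnames=asdict(records[0]).keys())
      writer.writeheader()
      writer.writerows(asdict(r) for r in records)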

Technologies and Other Constraints

  • The solution must comply with NC State University’s IT accessibility requirements.
  • Students should use a modern technology stack that is popular, well-documented, and maintainable (to support rapid development, onboarding, and long-term handoff).
Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background 

The Laboratory for Analytic Sciences, LAS, is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information. 

Background and Problem Statement 

Data labeling and data validation are important components of building AI and ML models. At LAS, we built a customizable data labeling application, called Infinitypool, to support the assessment of data. Yet building high-quality datasets with human data labeling is time-consuming and costly, and we need innovations that drive down both the time and the cost of producing them. This is especially important for the image and video domains, where there can be a lot of similarity between adjacent images or frames. 

This project will build on the LAS labeling application Infinitypool to create efficiencies in labeling image data, especially image frames from video. 

Project Description 

The current Infinitypool application is task based, meaning that each task (image) is labeled one at a time, and by a single person. LAS has expansive datasets of frame captures from videos that do not easily lend themselves to the one image/one label approach. We are looking to develop a new way to present tasks for labeling, showing multiple images (potentially in a tile display) so that multiple images can be labeled in a given “task”, and then to integrate these updates into existing workflows across multiple data modalities. 

This work will include front-end UI development on an existing code base, API development, and back-end integration. We currently use React for the front end, the Infinitypool API service is built on the Adonis.js framework, and PostgreSQL is our database. 
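
As a rough illustration of the tile-display idea (component and prop names here are invented, not Infinitypool's actual code), a multi-select grid in React might look like the sketch below: clicking toggles frames in and out of a selection, and one label click applies to every selected frame at once.

  import { useState } from "react";

  interface Frame { id: string; url: string; }

  // Hypothetical multi-select tile grid; onLabel would call the labeling API.
  export function TileLabeler({ frames, labels, onLabel }: {
    frames: Frame[];
    labels: string[];
    onLabel: (frameIds: string[], label: string) => void;
  }) {
    const [selected, setSelected] = useState<Set<string>>(new Set());

    const toggle = (id: string) => {
      const next = new Set(selected);
      next.has(id) ? next.delete(id) : next.add(id);   // click toggles selection
      setSelected(next);
    };

    return (
      <div>
        <div style={{ display: "grid", gridTemplateColumns: "repeat(6, 1fr)", gap: 4 }}>
          {frames.map(f => (
            <img key={f.id} src={f.url} onClick={() => toggle(f.id)}
                 style={{ outline: selected.has(f.id) ? "3px solid dodgerblue" : "none" }} />
          ))}
        </div>
        {labels.map(l => (
          <button key={l} disabled={selected.size === 0}
                  onClick={() => { onLabel([...selected], l); setSelected(new Set()); }}>
            {l}
          </button>
        ))}
      </div>
    );
  }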

Technologies and Other Constraints 

The team will have great freedom to explore, investigate, and design the labeling interface and interactions but will be subject to design and technology decisions in place with the existing Infinitypool application. However, any new methodology employed should not have any restrictions (e.g., no enterprise licenses required). In general, we will need this technology to operate on commodity hardware and software environments, and only make use of technologies with permissive licenses (MIT, Apache 2.0, etc.). Beyond these constraints, technology choices will generally be considered design decisions left to the student team. The LAS will provide the student team with access to AWS resources for development, testing, and experimentation, including GPU availability for model training. 

ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students.

Sponsor Background

NetApp ONTAP Performance Engineering helps maintain and improve ONTAP system performance measured using various metrics on a suite of NetApp hardware products. For conducting these tests, Performance Engineering operates and maintains multiple performance labs.

Background and Problem Statement

Performance labs are a highly automated environment used by the NetApp engineering community to submit performance tests with just a few clicks. They are made up of islands of data-driving clients and NetApp platforms connected to the same switch, avoiding network hops in these performance-critical tests. While one lab is dedicated to automated release performance testing on a regular basis, another enables analysts to submit tests on demand, and a third helps optimize ONTAP builds for performance. NetApp invests millions of dollars in these performance labs. Due to the demand for some of the platforms on which these tests run, submitted tests typically ‘wait’ for some time before they start running. While this ‘wait’ time can be fairly deterministic in a lab where tests are submitted in an automated fashion, in the labs where users submit tests on demand it can vary a lot, depending on multiple factors including the demand for a platform. In such cases, it would be helpful if, for a test being submitted, one could get an estimate of the time the test would start and the time at which it would end. This ‘ETA’ tool is what this project aims for.

Project Description

You are asked to build a tool that estimates the start and end times for a given test configuration, based on the historic data of tests submitted on the platform, data from tests with ‘similar’ configurations, and other relevant information about the lab configuration. In addition to providing these estimates, the tool should also provide visualizations of the ‘wait’ times for a given test configuration and, where feasible, suggest alternative platforms on which the test could start sooner. The tool can be developed as a web application, and ETA generation for a test configuration should also be made available via API querying.

Technologies and Other Constraints

Data

You can assume that the configuration data for any given test is a JSON file whose keys are various test parameters (strings) and whose values can be numeric, string, or another data type. Of the keys present in this JSON file, some may be more important than others in estimating the start and end times. Although the keys are well-defined and do not change significantly from one test to another, the same cannot be said about the values they take. Assume that the JSON can contain about 200 such keys, and that the number of tests can run into a hundred thousand over the course of one year for each lab.

The historic data for a test is a collection of the following: submission time, wait time, start time, and end time. Lab configuration can include the platform on which the test is supposed to run, the type of data-generating clients used, the switch to which this platform and those clients are connected, the number of such platforms available on this switch, the number of such data-generating clients available on this switch, and the number of each that are free at the moment a test is submitted. Some of this information can be found in the test configuration itself, whereas the rest could be sourced from a different location.

You can consider each of these fields a feature; they are grouped into test configuration, historic data, and lab configuration. Based on this information, the team is expected to generate data that closely follows this schema. Historic data is numeric; lab configuration and test configuration are part numeric and part categorical.
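
For concreteness, one synthetic record under this schema might look like the following sketch (the specific field names and values are invented for illustration; real configurations have on the order of 200 keys):

  type TestConfig = Record<string, string | number>;

  interface HistoricRecord {
    submissionTime: string;  // ISO timestamp
    waitTimeSec: number;
    startTime: string;
    endTime: string;
  }

  const example: { config: TestConfig; history: HistoricRecord } = {
    config: { platform: "A400", clientType: "gen3", workload: "random-read", threads: 64 },
    history: {
      submissionTime: "2025-06-01T09:00:00Z",
      waitTimeSec: 5400,
      startTime: "2025-06-01T10:30:00Z",
      endTime: "2025-06-01T13:30:00Z",
    },
  };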

Analyses

You are required to work on three categories of models that estimate the start and end times: analytical, statistical, and neural-network-based.

  • Analytical models rely upon historical test wait and execution times, test submission patterns such as inter-arrival times, lab configuration (number of platforms and clients), and test scheduling and prioritization algorithms. This category requires some understanding of queuing theory. 
  • Statistical models rely upon the numeric information for prediction and pattern recognition. We are looking at some time series modeling here.
  • Neural networks can, in principle, rely upon all the information provided for prediction. 

Note that not all data is useful. For example, in statistical modeling, you may want to identify the key features that contribute to estimating the start and end times and improve the model to work on this limited set of features. To identify those features, you may use correlation analysis, principal component analysis, and other dimensionality reduction techniques. You may also want to try tree-based methods, as they are known to work well with structured data.

In case of neural networks-based models, you may try those that help with time series, such as RNN, LSTM, or an SLM. You may also try other models if you are convinced they would help with the prediction.

For analytical models, you may wish to start with M/M/1 queues and then move to more complex models that better represent the lab configuration.
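
As a worked starting point for the analytical category (assuming Poisson arrivals at rate λ and exponential service at rate μ with λ < μ, which is exactly the simplification M/M/1 makes), the standard closed-form results can be computed directly:

  // M/M/1 steady-state estimates: Wq = λ / (μ(μ - λ)) is the expected wait in
  // queue, and W = 1 / (μ - λ) is the expected total time in system (wait + run).
  function mm1Estimate(arrivalsPerHour: number, servicePerHour: number) {
    if (arrivalsPerHour >= servicePerHour) throw new Error("queue is unstable (λ ≥ μ)");
    const waitHours = arrivalsPerHour / (servicePerHour * (servicePerHour - arrivalsPerHour));
    const totalHours = 1 / (servicePerHour - arrivalsPerHour);
    return { waitHours, totalHours };
  }

  // E.g., tests arrive at 2/hour and the platform completes 3/hour on average:
  // waitHours ≈ 0.67 and totalHours = 1, i.e., a test submitted now is estimated
  // to start about 40 minutes from now and finish about 1 hour from now.
  console.log(mm1Estimate(2, 3));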

Application Requirements

  • The web interface should let a user create a test configuration via drop-down menus / text boxes corresponding to the various keys.
  • After the user provides all the required information and clicks ‘Submit’, the estimates and visualizations should be displayed, along with the alternatives if feasible, and the user should be asked whether they would like to continue with their test. The UI and workflow can be better than what has been described here.
  • The ETA generation for a test configuration should also be made available via API querying. The query should accept the test configuration, the type of modeling, model hyper-parameters, etc., as arguments.
  • Statistical and neural-network-based models should not overfit the data used to train them. The quality of these models should be demonstrated by the use of a confusion matrix for various train / test / validation splits, and all models should be evaluated in a consistent manner.
  • The application should be responsive.
  • The application should be self-contained and lightweight. Would you use a popular open-source tool for the UI, or would you design it by yourself? Would you need a license to use some packages or tools, and if so, can you do something similar on your own? Do you build things from scratch or use open-source tools? What are the tradeoffs? These are some questions that the team needs to consider.
  • The application should be portable. Can this application be containerized (Docker / Kubernetes, for example) and easily deployed elsewhere? 
  • The application should be designed and implemented in such a way that it is manageable by someone who is not an author of it. Let us say you ship this application to the end users and a couple of them are assigned to maintain this application. How would you design your application such that they could be onboarded quickly and with minimal effort? If you make use of some esoteric technology that only your team understands, will the maintainers be able to learn this technology? Or would you use something that most people are aware of and use it to serve the purposes of your application?
  • The data schema that the application works upon should be configurable and so should the parameters of the application. Magic strings should be abstracted out into a configuration file as much as possible. If you are given a new data source that follows the data schema provided, will your application be able to switch to that data in a straightforward manner? Will your models need to be retrained? Does your application provide scripts that help perform this training?
  • Visualization and analyses performed by the application should be comprehensible to the user. Remember that not everyone is familiar with scientific jargon; for example, the metrics that are typically associated with machine learning models, such as precision, recall, and accuracy. Can you determine which of these metrics apply to which analyses and present them in a format, whether visual or textual, that a ‘typical’ user can easily understand? Can you make the visualizations interactive and natural to follow?

Skillset 

The team will work on the following in this project:

  • Data analysis
  • Analytical modeling
  • Statistical modeling
  • Neural networks
  • System design
  • Full-stack development
  • API development
  • UI/UX

Minimum Viable Product

A tool that provides start and end time estimates for a test configuration via an API query, and a web interface that helps visualize relevant historical data for that configuration. At least one statistical model and one neural-network-based model that work satisfactorily on the data generated by the team following the schema and other information provided earlier. This project is aimed at making the team work on estimation methods, API development, and some full-stack development. Analytical modeling and a more complex web interface that lets the user provide input are additional, but the team is strongly encouraged to plan appropriately and work on those as well.

Although some core principles are provided, the project is purposefully open-ended so that the team gets a chance to think through the details, discuss among themselves, communicate their thoughts to the sponsor, receive feedback, iterate, develop, explore, and learn over the course of the project.

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background 

Don created ResumeFab to help people create resumes tailored to each specific job listing. 

He has been a serial entrepreneur who has grown companies from 0 to hundreds of employees multiple times. He’s looked at thousands of resumes and has hired hundreds of people. He has experienced the frustration of having to translate what is on a resume to see why the applicant thinks they are a fit for the job. 

Background and Problem Statement 

Many resumes are created to present general skills and experiences. As a result, job seekers must summarize their education, work history, and accomplishments without knowing which parts will matter most to a particular employer. Employers and recruiters then spend time translating a broad resume into an assessment of whether the applicant truly fits the role.

ResumeFab addresses this mismatch by retaining a structured database of a user’s background—such as education, projects, employment (with titles), specific roles, completed tasks, and even hobbies. Using this information, AI can generate a new resume for each job listing that emphasizes the experiences and skills requested in that listing. The goal is to enable job seekers to create a bespoke resume for every job listing they pursue, rather than repeatedly editing a single generic resume.

ResumeFab is currently running as a free application, but it lacks administrative tools that provide visibility into who is using the product and how it is being used. The sponsor wants a dashboard that allows administrators to view Key Performance Indicators with user details, and to control selected features within the software.

This need becomes especially important as ResumeFab moves toward charging users. The company has a Stripe account, and the codebase already includes integration with the Stripe API to execute transactions, but the sponsor does not want to enable billing without basic monitoring and controls. A dashboard that provides billing visibility, usage tracking, and safe administrative control over charging behavior is required before turning on payments.

Project Description 

A Fall 2025 Senior Design student team worked on the first iteration of ResumeFab (https://resumefab.com/). They built a full-stack web app that lets users upload and parse an existing resume, enter a job description, and generate a tailored resume using an OpenAI-driven prompt pipeline (including markdown-structured prompts, one-shot prompting, and a multi-step verification approach). They also implemented core user features like multiple resume styles, a resume library/history with download/export, and a skill-mapping analysis against the job description, plus supporting backend APIs and testing infrastructure.

For Spring 2026, the student team will expand the existing basic administrator dashboard into a more robust, easier-to-use Control Panel that is responsive on both desktop and mobile devices. This Control Panel will allow ResumeFab administrators to view, and in some cases control, real-time and historical performance, billing, and usage data for clients.   

The project has two primary technical focuses:

  • Review and expand the current SQL database to improve visibility into product usage, and incorporate mechanisms (e.g., cookie-based identification where appropriate) to associate usage with user context for operational monitoring and reporting.
  • Build an administrator page that controls how and what users are charged so ResumeFab can experiment with different revenue models to optimize monetization, while maintaining the monitoring and controls needed for safe adoption of billing.
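
For the first focus, one possible shape for usage visibility (a sketch only; the table, column, and header names are hypothetical, not the existing ResumeFab schema) is a usage-events table populated by an Express middleware:

  import express from "express";
  import { Pool } from "pg";

  const pool = new Pool();  // connection settings come from the standard PG* env vars
  const app = express();

  // Hypothetical usage-event log: one row per request, keyed to a user or an
  // anonymous cookie so the admin dashboard can aggregate KPIs later.
  // CREATE TABLE usage_events (id SERIAL, user_id TEXT, cookie_id TEXT,
  //                            action TEXT, created_at TIMESTAMPTZ DEFAULT now());
  app.use(async (req, _res, next) => {
    try {
      await pool.query(
        "INSERT INTO usage_events (user_id, cookie_id, action) VALUES ($1, $2, $3)",
        [req.header("x-user-id") ?? null, req.header("x-anon-cookie") ?? null, req.path],
      );
    } finally {
      next();  // never block the request on logging
    }
  });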

Technologies and Other Constraints 

  • Full-stack TypeScript web application (React frontend + Node/Express backend).
  • PostgreSQL database
  • Integrates OpenAI (AI resume generation) and Stripe (billing).
  • Constraints: build on the existing codebase; includes database construction/management and required reporting/integration needs (Fintech).
Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

SAS provides technology that is used around the world to transform data into intelligence. A key component of SAS technology is providing access to good, clean, curated data.  The SAS Data Management business unit is responsible for helping users create standard, repeatable methods for integrating, improving, and enriching data.  This project is being sponsored by the SAS Data Management business unit in order to help users better leverage their data assets.

Background and Problem Statement

An increasingly prevalent and accelerating problem for businesses is dealing with the vast amount of information being collected. Combined with a lack of data governance, enterprises face conflicting use of domain-specific terminology, varying levels of data quality/trustworthiness, and fragmented access. A lack of timely and accurate data reporting may end up driving poor decisions, operational inefficiencies, and financial losses, in addition to exposing businesses to regulatory penalties, compliance failures, and reputational damage, ultimately putting them at a competitive disadvantage. 

To help address the underlying issues related to managing their data, a business may buy, or build, a data governance solution that allows them to holistically identify and govern an enterprise's data assets. SAS has developed a data catalog product which enables customers to inventory the assets within their SAS Viya ecosystem. The product also allows users to discover various assets, explore metadata, and visualize how the assets are used throughout the platform.

In past semesters, SAS student teams have explored different avenues and extensions to a data catalog, such as operationalizing governance and creating accessible components to visualize metadata. In this project, we'd like to focus instead on the discovery and, primarily, exploration phase. A data catalog provides a curated view of the metadata that is stored within the environment, with the intention of providing easy-to-understand dashboards and visualizations, about specific types of metadata, to a widespread audience. When displaying cataloged assets to an end user, a view of the metadata for a specific context or user persona is presented. The view will hide some of the metadata and the underlying complexity. 

And this is where we get to one of the weaknesses of visualizing a data catalog. The catalog provides a metadata system to support storing any type of metadata object, but displaying that information to end users (outside of just an API) in a useful way is difficult without an understanding of the context (including the business, domain, and user personas). 

What the sponsor would like to investigate in this project is a metadata explorer. An interface that gives the user unfiltered access to all the metadata in the environment (that they are authorized to view). If the catalog is akin to a traditional, physical library card system then the explorer is akin to walking through the bookstacks or shelves. An explorer opens up the possibilities for end users. If we're already providing an interface to view the metadata, then why not also be able to edit it? Or for example, once a user understands the underlying metadata then they could create a customized dashboard.

Project Description

Ingest Metadata

On startup, the application must ingest/load a pre-defined set of metadata (in a JSON format). An initial set of metadata will be provided by the sponsors as well as a script for the generation of synthetic metadata. All metadata provided will conform to the Open Metadata schema (https://docs.open-metadata.org/v1.4.x/main-concepts/metadata-standard/schemas).

Metadata Explorer

The application must provide an API to perform create, read, update, and delete (CRUD) operations on metadata in the system. 
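
A minimal sketch of what such a CRUD surface could look like (Express-style routes over an in-memory store; the endpoint paths are illustrative, and persistence plus validation against the Open Metadata schema are left to the team):

  import express from "express";

  const app = express();
  app.use(express.json());

  // In-memory store keyed by entity id; entities are Open Metadata JSON documents.
  const entities = new Map<string, object>();

  app.post("/api/entities/:id", (req, res) => {          // create
    entities.set(req.params.id, req.body);
    res.status(201).json(req.body);
  });
  app.get("/api/entities/:id", (req, res) => {           // read
    const e = entities.get(req.params.id);
    e ? res.json(e) : res.sendStatus(404);
  });
  app.put("/api/entities/:id", (req, res) => {           // update (full replace)
    if (!entities.has(req.params.id)) return res.sendStatus(404);
    entities.set(req.params.id, req.body);
    res.json(req.body);
  });
  app.delete("/api/entities/:id", (req, res) => {        // delete
    res.sendStatus(entities.delete(req.params.id) ? 204 : 404);
  });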

The application must provide an easy-to-use, performant interface for users to explore the metadata available in the system. The students should brainstorm various approaches or UI patterns that could be used to allow users to visualize the metadata.

The application should provide the following functionality:

  • The ability to view and edit all of the attributes associated with a given entity.
  • The ability to see all classifications/relationships associated with a given entity.
  • (stretch goal): Add new attributes to an existing entity.
  • (stretch goal): Create/add new relationships.
  • (stretch goal): Ability to filter based on attributes.

Out of Scope

The following are out of scope for this project:

  • User authentication/authorization

Technologies and Other Constraints

  • Any open-source library/packages may be used.
  • This should be a web application. While the design should be responsive, it does not need to run as a mobile application.
  • React must be used for UI.
  • For any GenAI usage, open-source models must be used.

Sponsor Background

Teen Health Research (THR) Inc. is a startup dedicated to providing a program for parents and children ages 10 to 19 to inform and facilitate communication related to health and well-being. THR has developed an interactive web app for the program. 

Background and Problem Statement

Journaling has long been a popular way to manage communication and mental health. The Teen Health Research platform includes a collaborative journal for teens to reflect on their challenges, excitements, and feelings and to have a conversation with their parents, who have their own journal to reflect on the journey of their relationship with their children. 

The main innovation in this project will be to develop AI-guided journaling support that engages users in responsibly recording their experiences while keeping each side in full control of how much of those experiences, and in what form, is shared with their family. 

Project Description

The envisioned software solution will be a personal journaling web-app with the following features:

  1. Profile generation with a rich set of (required and optional) profile parameters (demographics, interests, schedules, personality, temperament, among others).
  2. A set of standard templates, approved by experts, to prompt for journaling entries.
  3. A collection of language models with unit tests and benchmarks for conversations. Three approaches are to be implemented (see the sketch after this list): 
    1. RAG-based approach so the LLM only uses information from curated sources
    2. Open LLM only with prompt engineering constrained to focus on the topic
    3. LLM fine-tuned for counseling with relevant resources
  4. Ability to create Parent and Child profiles with a secure shared journal where there can be selective sharing of information and detailed conversations
  5. A Dashboard for tracking conversations and user engagement with a couple of ways to actively create more engagement (calendar integration, easy email-response posts, etc.)
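
As an illustration of approach 2 only (prompt engineering over an off-the-shelf LLM), the sketch below uses an OpenAI-style client, which many local model servers also expose; the model name and guardrail wording are placeholders. Approach 1 would prepend passages retrieved from curated sources, and approach 3 would swap in a counseling-fine-tuned model.

  import OpenAI from "openai";

  // Reads OPENAI_API_KEY from the environment; pass { baseURL } here to point
  // at a local OpenAI-compatible server (e.g., a model hosted on VCL).
  const client = new OpenAI();

  // Constrain the model to guided-journaling territory via the system prompt alone.
  async function journalPrompt(userEntry: string): Promise<string> {
    const res = await client.chat.completions.create({
      model: "gpt-4o-mini",  // placeholder model choice
      messages: [
        {
          role: "system",
          content:
            "You are a guided-journaling assistant for teens and parents. " +
            "Only ask reflective follow-up questions about health, feelings, and " +
            "family communication. Never give medical advice; suggest talking to " +
            "a trusted adult or professional for anything beyond reflection.",
        },
        { role: "user", content: userEntry },
      ],
    });
    return res.choices[0].message.content ?? "";
  }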

Technologies and Other Constraints

Web-based and (as a stretch) a mobile app. The conversation aspect can be prototyped as a Discord bot to ease the transition to the main app framework of the existing Let’s Talk app. 

The Let’s Talk app is deployed using Heroku and uses a MongoDB Atlas database.
Small-to-medium models running on our VCL machines with API access should be sufficient as a starting point. 

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

The ARNAV Lab is Dr. Jhala’s research group. They investigate computational structures and methods that are useful in representing and mediating human interpretation and communication of narrative in interactive visual media, such as film and games. The Jhala research group uses symbolic and probabilistic tools to represent and construct coherent visual discourse and apply generative techniques for automated and semi-automated tools to interpret and collaboratively create visual narratives.

Background and Problem Statement

The ARNAV Lab is creating a platform for simulating conversations between groups of people that have rich personalities, experiences, opinions, and expressions. While an original project was developed using the Unity game engine, the visual management of character sprites and user-interaction pieces is challenging to maintain, and to run user evaluations with, due to the constant updates to the game engine. We are looking for a lightweight way to focus our study on the conversation aspect of the AI agents in order to simulate different types of community interactions.

Project Description

Inspired by Discord servers with bots designed to help a Game Master run DnD simulations, we want to develop communities of AI agents and human participants that run rich simulations of communities and conversations over time.

In this project, we will be developing a framework that integrates with Discord to include:

  1. Server configuration for initializing and running a community of bots on a new “blank” server.
  2. Initializing bots into this server for simulated users that have profiles (goals, personality, background, visual look, communication styles, etc.) using AI models
  3. World building with locations (that are represented by channels) and overall narrative with the ability to schedule time movement (days, weeks, seasons) and events (hurricanes, sports events, etc.) based on timed messages to the community for response. Built-in chance (die rolls) for severity of events.
  4. A dashboard that shows community engagement for every bot and human entity that is interacting within a server.
  5. A demonstration world where you can see bots going about their daily lives through conversations and interesting events (This can be made into some type of RPG game as a stretch goal if the team is so inclined)

Technologies and Other Constraints

The framework will interact with Discord via its API using a well-supported client (e.g., Discord.js). Students can propose the language and paradigm for the framework itself, but a web-based platform is suggested. Use of an LLM and likely an image model will be necessary. 
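
A minimal Discord.js starting point for a single simulated persona might look like the sketch below; the profile fields and the canned reply are placeholders for what the AI models would generate.

  import { Client, Events, GatewayIntentBits } from "discord.js";

  // One bot per simulated persona; the profile would normally be generated by
  // an AI model per item 2 of the framework description above.
  const persona = { name: "Maya", style: "cheerful shopkeeper" };  // placeholder profile

  const client = new Client({
    intents: [GatewayIntentBits.Guilds, GatewayIntentBits.GuildMessages,
              GatewayIntentBits.MessageContent],
  });

  client.on(Events.MessageCreate, async (msg) => {
    if (msg.author.bot) return;  // avoid bot-to-bot loops in this simple sketch
    // In the real framework this would call an LLM with the persona profile,
    // channel (location) context, and current world events to produce a reply.
    await msg.reply(`${persona.name} (${persona.style}) waves hello!`);
  });

  client.login(process.env.DISCORD_TOKEN);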

Sponsor Background

This request is part of an ongoing project within the Civil, Construction and Environmental Engineering (CCEE) Department, sponsored by the Alaska Department of Transportation and Public Facilities (AKDOT&PF). As the most seismically active state in the United States, Alaska faces unique infrastructure challenges. AKDOT&PF has been supporting NC State research focused on enhancing the seismic safety of bridges for over 20 years.

Background and Problem Statement

After a damaging earthquake, it is critical to quickly determine the status of civil infrastructure, such as highway bridges.  This helps state agencies make informed decisions, avoid unnecessary risks, and reduce potential losses. In seismic regions, bridges play a vital role after an earthquake by serving as lifelines, providing access for emergency vehicles and helping reconnect isolated communities.

However, assessing the condition of dozens or even hundreds of bridges immediately after an earthquake is a challenge, especially in states where bridges are spread across vast and remote areas, as is the case in Alaska. Figure 1 shows the transportation network (bridges as circles) overlaid with the intensity of the seismic hazard (darker colors represent a bigger hazard). Sending engineers to every bridge site for inspections can take days or weeks, time that might delay critical emergency response efforts. 

The focus of our research project is to develop a rapid and practical method to evaluate the post-earthquake performance of bridges. This project also looks beyond post-earthquake response. The same type of analysis can be used to run scenarios before an earthquake happens to identify vulnerable bridges in advance, improving emergency planning and even informing better design choices.

Figure 1. Location of AKDOT&PF bridges and the identified hazard level for different regions in Alaska.

Project Description

A Computer Science Senior Design team in Fall 2025 developed a full-stack web application designed to assess bridge structural integrity in response to earthquake events. The system fetches real-time earthquake data from USGS, performs structural assessments on bridges using scientific models, and provides a web interface for monitoring and management.

Key Features:

  • Real-time earthquake data ingestion from USGS
  • Automated bridge structural assessments
  • User authentication and role-based access control
  • Interactive map-based visualization
  • Admin dashboard for managing bridges, users, and seismic stations

While Phase I of the Rapid Post-Earthquake Assessment Tool enabled the completion of key components, the following tasks (Phase II) are recommended to further develop the system and prepare it for use by engineers in post-earthquake decision making. 

The items listed below represent a comprehensive wish list rather than a fixed scope. Some tasks are expected to be relatively straightforward to implement, while others will require more significant development effort. They vary in complexity and level of effort, and priorities can be discussed and established with the student team at the start of the project.

 

  1. Offline Functionality
    Offline functionality is essential for AKDOT’s field engineers, who often work in areas with limited connectivity. Implementing it would require key features such as a service worker for caching, IndexedDB to store network requests, and synchronization logic to send or update data when a connection is re-established (a minimal caching sketch follows this list).
  2. Email Notifications
    The system should send email notifications to users when an earthquake assessment is completed. Notifications should only be triggered when the earthquake magnitude exceeds the user-defined threshold. Each notification email should include the inspection priority list as a PDF attachment to allow timely review by engineering staff.
  3. OAuth 2.0 Integration
    Currently, the application handles basic user management and authentication. For production use, however, it will need to be integrated with OAuth 2.0 and AKDOT’s Microsoft Entra ID system. We recommend maintaining two separate authentication models: one for production and one for local development. Because NCSU’s civil engineering team will not have access to AKDOT’s Microsoft accounts, local user management and authentication will still be needed for testing and development. In production environments, when deployed to AKDOT’s Azure infrastructure, AKDOT&PF employees will authenticate using their Microsoft accounts. This would also align with future development plans for email notifications when bridge inspection priority lists are generated after an earthquake.
  4. Revise Landing Page
    Revise the landing page such that it includes: a user guide describing how to operate the tool, primary contact email, frequently asked questions section, inventory map displaying the bridge network, and display of the last 10 earthquakes exceeding a predefined magnitude threshold within the state.
  5. Include Bridge Database Page
    A page similar to “Bridges” in the Settings page, but one where we can see a list of all the bridges and their most important information (internal ID, type, columns per pier, number of spans, etc.), alongside an inventory map.
    • Easy access at the same navigation level as Earthquakes and Simulations.
  6. Individual Bridge Page Enhancements
    Enhancing the individual bridge view by:
    • Presenting system-level values (e.g., system displacement, total base shear, effective stiffness) in a structured table format.
    • Displaying key plots produced by the analysis code (non-earthquake dependent code).
    • Enabling export of individual bridge non-earthquake dependent data (table and figures).
    • Adding a link to earthquake dependent performance data for each bridge. While summary results are available in the inspection priority list, this feature will provide access to more detailed event-level information when needed.
  7. Assessment Page Improvements
    Improve the assessment interface to allow:
    • Export of the inspection priority list (PDF and CSV formats).
    • Filtering of results by bridge attributes (e.g., type, number of spans, number of columns).
    • Visualization of the number of bridges per damage category (“color” distribution).
    • Overlay of USGS ShakeMap data on the bridge assessment map.
    • Clickable map features linking to individual bridges page.
    • Incorporation of new workflow (to be provided by the CCEE research team) that enables the selection of more complex ground motion models, the automated calculation of required new input variables and the display of the resulting response spectrum (i.e., the seismic demand) 
    • Incorporation of comparison tools for assessments run using different ground motion models.
  8. Stations page:
    • Communication with ground motion data centers (e.g., the Alaska Earthquake Center) to retrieve processed ground motions.
    • Add interface to display available recorded ground motions (e.g., acceleration, velocity, and displacement time series).
  9. Configurable Assessment Logic
    Provide access for advanced users (e.g., research team only) to modify assessment logic from within the application.
  10. Updated Assessment Algorithms
    Integrate recent upgrades to the assessment logic that were developed in MATLAB by the research team, translating and implementing them in Python. Examples of this include:
    User Flexibility
    Enable admin users to modify key boundary conditions and limit-states values, which are currently fixed in the default assessment logic.
    Frozen Soil Effects
    Add an option within the Assessment page to include or neglect the effects of frozen soil conditions. The goal is to better represent bridge behavior in cold regions.
    • This option will directly affect calculated bridge response and update the inspection priority list. 
  11. Adaptable Bridge Database Structure
    Account for different column types (RC vs. RCFST) by allowing the database structure to adapt accordingly.
    Provide the option to select between:
    • A complete database (full input parameters, higher fidelity. What we currently have).
    • A simplified database (fewer required inputs for faster population of database. Consequently, faster deployment).
  12. Bridge Database Expansion
    Provide a mechanism for large-scale database updates without requiring manual data entry through the application interface. This may include importing bridge information using Excel or CSV files, enabling efficient population and maintenance of the bridge inventory.
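
For item 1 above, a minimal cache-first service worker sketch is shown below (the cache name and pre-cached asset list are placeholders, and the IndexedDB write queue plus sync logic are omitted):

  /// <reference lib="webworker" />
  // Cast once so the rest of the file gets service-worker typings.
  const sw = self as unknown as ServiceWorkerGlobalScope;

  const CACHE = "bridge-tool-v1";  // placeholder cache name

  // Pre-cache the application shell at install time.
  sw.addEventListener("install", (event) => {
    event.waitUntil(
      caches.open(CACHE).then((c) => c.addAll(["/", "/index.html", "/assets/app.js"])),
    );
  });

  // Cache-first: serve cached responses when offline, otherwise hit the network
  // and cache successful responses for next time.
  sw.addEventListener("fetch", (event) => {
    event.respondWith(
      caches.match(event.request).then(
        (hit) =>
          hit ??
          fetch(event.request).then((res) => {
            const copy = res.clone();
            caches.open(CACHE).then((c) => c.put(event.request, copy));
            return res;
          }),
      ),
    );
  });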

Technologies and Other Constraints

Technologies to use are based on what seemed appropriate in Phase I of the project and what Phase II may require for improvements. A recommendation stemming from the completion of Phase I is to become familiar with the available User Guide, Developer’s Guide, and Installation Guide for the details of Phase I. 

Resources used during Phase I:

  • Python
  • FastAPI
  • SQLAlchemy
  • MySQL
  • Docker
  • Nginx
  • React
  • Vite
  • TypeScript
  • Tailwind CSS
  • Axios
  • USGS API
Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

Decidio uses Cognitive Science and AI to help people make better decisions and to feel better about the decisions they make. We plan to strategically launch Decidio into a small network fanbase, then grow it deliberately, methodically and through analytics into a strong and accelerating network.

Background and Problem Statement

Most landing pages explain. We want to create one that invites interaction — and rewards discovery. 

Decidio is developing a decision-intelligence platform that helps people make better choices with clarity and confidence. As awareness grows, the goal is for the brand to teach itself — not through long paragraphs, but through a playful, visual experience people can’t resist touching.  

Project Description

Students on this project will build a dynamic, game-like visualization — based on the Decidio logo system — that responds to user actions, hides solvable logic beneath the surface, and quietly converts curiosity into early-access signups. Visitors don’t just see Decidio. They explore it, experiment with it, solve it — and talk about it.

This isn’t meant to be “a game.” It’s an interactive brand teaser that communicates value through interaction — while generating measurable early-engagement signals. It blends design, psychology, product strategy, and engineering — exactly the kind of challenge that prepares a senior team for real-world environments.

Engaging core mechanic

  • A constellation of animated dots that invites interaction.
  • Hidden rules connect those dots in subtle, meaningful ways.
  • Every hover, click, and drag creates movement, ripples, clusters, or alignment.
  • Visual cues hint when actions move the system toward order.

When players finally “get it,” they unlock an invitation-only early access. The result is:

  • memorable
  • sticky
  • inherently shareable
  • psychologically rewarding (curiosity → mastery → payoff)

Success metrics 

We would like all these interactions to be instrumented so that it becomes a living experiment, allowing A/B testing of rules, difficulty, and feedback loops.

  • % of visitors who interact 
  • average interaction time 
  • % reaching win states 
  • conversion to signup after winning 
  • referral/viral behavior signals (return plays, shared links) 

Beyond the numbers, the qualitative goal is simple: People should leave thinking, “That was fun — and this company thinks differently.”

Built-in virality

The experience is intentionally mysterious — and social. “I figured out how to unlock it — can you?” That creates natural sharing behavior:

  • friends challenging friends
  • posts about “secret” invite puzzles
  • players explaining strategies
  • repeat visits to try again

Over time, rewards can expand to include vendor perks or feature privileges — deepening viral loops without changing the mechanic.

Stretch Goal: Companion mobile app

A companion mobile experience mirrors the same system:

  • shared backend + rules engine
  • consistent visual language
  • notifications for rewards and new puzzles

This pushes the team to think cross-platform — while staying scoped and realistic.

Technologies and Other Constraints

TypeScript, JavaScript, CSS, HTML, and Node.js, using React and possibly React Native (for the mobile app). Auto-layout uses physics-based force-directed mechanics. Students are encouraged to explore D3.js-based libraries.
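
For the force-directed auto-layout, a typical d3-force setup looks like the sketch below (the node/link data and the tick handler are placeholders; rendering and drag handling are left out):

  import * as d3 from "d3";

  // A constellation of dots held together by hidden rules: d3-force drives the
  // physics, and the tick handler repaints positions each frame.
  interface Dot extends d3.SimulationNodeDatum { id: string; }

  const dots: Dot[] = Array.from({ length: 60 }, (_, i) => ({ id: `d${i}` }));
  const links = dots.slice(1).map((d, i) => ({ source: dots[i].id, target: d.id }));

  const sim = d3.forceSimulation<Dot>(dots)
    .force("charge", d3.forceManyBody().strength(-30))          // dots repel
    .force("link", d3.forceLink(links).id((d: any) => d.id))    // hidden connections
    .force("center", d3.forceCenter(400, 300))                  // keep the cluster on screen
    .on("tick", () => {
      // Repaint here (SVG or canvas); dragging a dot reheats the simulation,
      // producing the ripples and re-clustering the brief calls for.
    });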

Front-end / Interaction 

  • high-performance animated visualization 
  • fluid pointer + touch interactions (hover, drag, tap, gesture) 
  • responsive layout across screens and devices 

Rules & Game Logic 

  • configurable rules engine controlling relationships and states 
  • feedback cues (hints, micro-animations, reinforcement) 

Backend Services 

  • secure invite + early-access ticket pipeline 
  • user interaction logging 
  • API layer supporting both web and mobile 

Analytics & Experimentation 

  • event instrumentation 
  • dashboards summarizing engagement 
  • support for A/B testing behavior variations 

Mobile Companion (Stretch Goal) 

  • mobile UI mapped to the same rules engine 
  • onboarding + win notifications 
  • synchronized user progress 

Ethics & Accessibility 

  • reduced-motion option 
  • color-safe design 
  • transparent reward messaging 

The project naturally splits into phases, minimizing risk and creating clear milestone reviews.

Sponsor Background

FreeFlow Networks is a startup pursuing innovative technologies for potential commercialization. One current project is Whisker Wings, a flight-based game centered on energy management, environmental interaction, and skill-driven play. This project supports the long-term scalability and quality of content creation for the game.

Background and Problem Statement

Physics-based flight games present unique challenges in level design. Unlike traditional platformers or action games, flight games have to obey strict physical constraints such as turn radius, reaction time, speed envelopes, and other limitations. Even small level design errors can result in levels that are technically impossible, unfair, or frustrating to play.

Hand-crafting large numbers of high-quality flight levels is time-consuming and prone to human error. Yet mobile game consumers today expect long play time across many bite-size levels. As a result, a small game development team may struggle to scale level creation to meet consumer demand without sacrificing quality.

Fully procedural level generation, while it sounds good in theory, can produce level layouts that are boring (not fun), violate needed game constraints, or require extensive correction from a human designer. 

The motivation for this project is to create a designer-controlled, constraint-driven level authoring tool that speeds up level creation while guaranteeing that levels are playable. The system should encode flight-specific level design knowledge directly into the tool, allowing a designer to generate and iterate on levels more efficiently and without fear of common level design failures.

Project Description

The proposed solution is an in-engine level authoring tool that uses procedural assistance constrained by flight-specific rules. Instead of replacing human designers, the tool assists designers by generating level layouts that already respect known limitations/constraints.

Designers will specify their intent for the level, such as aircraft class, difficulty, skill focus, and which obstacles/enemies to include and in what quantity. Based on these inputs, the system will generate a bounded flight corridor; place checkpoints, obstacles, and other challenging events; and validate the resulting layout against a set of hard constraints. If constraints are violated, the system must clearly report the failure or regenerate the level. A sketch of one such constraint check follows.
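
One example of a hard constraint the validator could enforce, sketched in TypeScript for illustration (the in-engine implementation would be GDScript per the constraints below): every turn implied by three consecutive corridor waypoints must be flyable at the aircraft's minimum turn radius, r_min = speed / max turn rate. The circumradius heuristic and the numbers are assumptions, not rules from the Whisker Wings design document.

  // Circumradius of the circle through three 2-D waypoints; if it is tighter
  // than the aircraft's minimum turn radius, the layout is unflyable at speed.
  type P = { x: number; y: number };

  function circumradius(a: P, b: P, c: P): number {
    const ab = Math.hypot(b.x - a.x, b.y - a.y);
    const bc = Math.hypot(c.x - b.x, c.y - b.y);
    const ca = Math.hypot(a.x - c.x, a.y - c.y);
    const area2 = Math.abs((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
    return area2 === 0 ? Infinity : (ab * bc * ca) / (2 * area2);  // collinear: no turn
  }

  function validateCorridor(waypoints: P[], speed: number, maxTurnRateRad: number): string[] {
    const rMin = speed / maxTurnRateRad;  // minimum flyable turn radius
    const failures: string[] = [];
    for (let i = 0; i + 2 < waypoints.length; i++) {
      const r = circumradius(waypoints[i], waypoints[i + 1], waypoints[i + 2]);
      if (r < rMin) failures.push(`turn at waypoint ${i + 1}: radius ${r.toFixed(1)} < ${rMin.toFixed(1)}`);
    }
    return failures;  // empty array means the corridor passes this constraint
  }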

Example use cases include:

  • Rapidly generating candidate layouts for new levels
  • Creating side challenges or time-trial courses
  • Ensuring consistency across difficulty tiers
  • Preventing common design failures before playtesting

The resulting tool will significantly reduce iteration time while improving overall level quality.

Technologies and Other Constraints

Existing Environment:

  • Students will be given access to a GitHub repository for an existing game prototype built in the Godot game engine. The prototype includes a basic level, aircraft, and obstacles. At this stage, the prototype is extremely basic, with simple flight physics and controls implemented and basic level-editing extensions to the Godot level editor. Students are welcome to propose changes to the prototype as needed to achieve their project objectives.  
  • Students will be given a copy of the game design document for Whisker Wings so that they can understand the intention of the ultimate game, enabling them to think creatively about level designs that can be generated.

Technical Constraints:

  • Game Engine: Godot 
  • Programming Language: 
    • GDScript required for in-game code.  
    • GDScript (preferred) or C# for level editor code.
  • Level Editor Paradigm: 
    • Tooling for level-generation will preferably be exposed to the game designer inside the Godot Engine editor so that the designer can tweak the level as desired (tooling is not meant to be player facing). 
    • It would also be acceptable to build side tooling that generates levels that could then be opened by the game designer in Godot Engine level editor. 
  • Procedural Generation Techniques: 
    • Constraints should be implemented to ensure generated levels are playable, meaning the player can successfully navigate them in-game.  This will require understanding of how in-game flight controls and physics work.  
    • Level generation should not be entirely random, instead conforming to specific types of levels (ex: time challenge course, ring challenge course).  Each level type likely needs “assembly rules” to ensure the level designed indeed plays as the intended level type.
  • Existing Game Code & Assets:
    • Addition of new game assets or modification to existing assets (ex: course obstacles) is allowed.  Please discuss changes with the sponsors.
    • Modification of the core flight physics and control system is not allowed.  If a change is needed please speak with the sponsors before proceeding.
Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

The Laboratory for Analytic Sciences, LAS, is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have a significant and broad-reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information.

Project Description

The idea is to create an AI-agent-based framework to play the game Diplomacy, in the spirit of the following arXiv paper: https://arxiv.org/abs/2506.09655. The work will include the framework for running the AI agent system, a UI for swapping the models that play the game, and analysis and visualizations to assess how the models decide on a strategy, interact, and make decisions, and to characterize the overall performance of each model.
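
One sketch of the model-swapping seam such a framework might expose (the interface and class names are invented for illustration; the cited paper remains the architectural reference):

  // Each power is driven by an agent; swapping models means swapping this
  // implementation, so the UI can pit different LLMs against each other.
  interface GameState { phase: string; units: Record<string, string[]>; }
  interface Order { unit: string; action: string; }

  interface DiplomacyAgent {
    negotiate(power: string, inbox: string[], state: GameState): Promise<string[]>;  // press messages
    issueOrders(power: string, state: GameState): Promise<Order[]>;                  // movement phase
  }

  // One possible implementation delegates both calls to an LLM chosen in the UI.
  class LlmAgent implements DiplomacyAgent {
    constructor(private model: string) {}
    async negotiate(power: string, inbox: string[], state: GameState) {
      return [];  // placeholder: prompt this.model with inbox + state, parse replies
    }
    async issueOrders(power: string, state: GameState) {
      return [];  // placeholder: prompt this.model for orders, validate legality
    }
  }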

Sponsor Background 

Dr. DK Xu is an Assistant Professor in the Department of Computer Science at North Carolina State  University. His research focuses on AI system design, large language models (LLMs), multimodal  reasoning, and agentic AI systems for scientific and engineering applications.

He is joined by Dr. J. Paul  Liu, a Professor in NC State’s Department of Marine, Earth and Atmospheric Sciences (MEAS) and  Director of International Affairs in the College of Sciences, whose expertise in coastal and marine geology,  including continental shelf sedimentation, seismic profiling, and sea-level rise, provides strong domain  grounding for ocean and coastal data workflows.

Dr. Ruoying (Roy) He is a Goodnight Innovation  Distinguished Professor in MEAS and leads the Ocean Observing and Modeling Group; his expertise in  physical oceanography, ocean observing systems, and numerical modeling closely aligns with the scientific  use cases and validation needs of an AI-enabled ocean science assistant.

The project is further supported  by graduate student mentor Bowen (Berwin) Chen, who will provide hands-on technical guidance on  system integration, backend implementation, and development best practices throughout the project. 

The OceanVoice project builds on the development of Ocean AI (https://oceanai.ai4ocean.xyz/), an LLM-powered AI science assistant designed to support oceanographic data exploration, analysis, and  visualization. Ocean AI can process multimodal inputs and outputs, including text, tables, numerical  datasets, and plots, to assist researchers, students, and domain users in scientific workflows. This senior  design project will extend Ocean AI with a voice-first interaction layer, enabling users to interact with the  platform through spoken queries and receive responses through both voice and on-screen multimodal  outputs. The project emphasizes robust system integration, task execution, and human-in-the-loop  interaction rather than model training. 

Background and Problem Statement 

The current Ocean AI platform provides advanced capabilities for text-based data retrieval, analysis, and  reasoning. However, interaction is limited to typed input, which introduces friction in many realistic  scientific workflows such as exploratory analysis, collaborative discussions, teaching demonstrations, and  hands-busy or mobile environments. 

Spoken input offers a more natural interface but introduces new system challenges:

  • Voice queries are often less structured, more ambiguous, and may implicitly contain multiple steps.
  • Users may omit critical parameters such as time range or region. 
  • Certain actions (e.g., exporting large datasets) require confirmation to avoid unintended execution. 

Without explicit system support for grounding spoken input, executing tasks safely, and  interacting with users to resolve ambiguity, a voice interface can easily lead to incorrect or misleading results. For a next-generation multimodal LLM-powered ocean science assistant, supporting voice  interaction is not simply a UI feature; it requires careful design of intent understanding, controlled task  execution, and transparent output presentation. Addressing these challenges will significantly improve the  usability, robustness, and real-world applicability of the Ocean AI platform. 

Project Description 

The OceanVoice senior design team will develop a voice-enabled interaction module that can be  integrated into a simplified version of the existing Ocean AI platform. The system will allow users to  issue spoken scientific queries, execute corresponding data retrieval or analysis tasks, and receive results  through combined voice and visual outputs. To ensure feasibility and clarity, the project is structured  around three progressively advanced capability levels. 

  1. Voice Input to Scientific Request Execution 
    • Speech-to-text processing for scientific queries 
    • Mapping spoken input to structured, executable requests (e.g., variable, region, time range,  operation) 
    • Execution of a single retrieval or analysis task 
    • Spoken summary of results plus a visual output (number, table, or plot) 
  2. Simple Multi-Step Task Execution 
    • Parsing spoken input that implies two-step workflows (e.g., retrieve → plot, analyze → summarize) 
    • Sequential task execution with intermediate result passing 
    • Combined multimodal output including plots or summaries with a brief spoken explanation
  3. Interactive Clarification and Confirmation 
    • Detection of missing or ambiguous parameters in voice input 
    • System-initiated clarification questions (e.g., asking for year or region) 
    • Confirmation prompts for high-impact actions such as data export 
    • Transparent display of final execution parameters and data provenance 

The platform will be delivered as a web application integrated with a simplified Ocean AI backend,  emphasizing system correctness, usability, and transparency. 
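
As a sketch of what the grounded request and the Level 3 missing-parameter check might look like (field names are illustrative, not the actual Ocean AI schema):

  // Grounded form of a spoken query such as
  // "Show sea surface temperature in the Gulf of Mexico in twenty twenty."
  interface ScienceRequest {
    variable?: string;              // e.g. "sea_surface_temperature"
    region?: string;                // e.g. "Gulf of Mexico"
    timeRange?: { start: string; end: string };
    operation?: "retrieve" | "average" | "trend" | "plot" | "export";
  }

  // Level 3 behavior: before executing, detect missing parameters and return a
  // clarification question instead of guessing; export additionally requires
  // explicit confirmation as a high-impact action.
  function clarificationNeeded(req: ScienceRequest): string | null {
    if (!req.variable) return "Which variable would you like, e.g. temperature or chlorophyll?";
    if (!req.region) return "Which region should I use?";
    if (!req.timeRange) return "Which time range should I use, e.g. a year or a span of years?";
    if (req.operation === "export") return "This will export a large dataset. Shall I proceed?";
    return null;  // fully specified: safe to execute
  }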

Representative Voice Interaction Scenarios 

To clarify the scope of the system and guide implementation and testing, the project defines a small set of representative voice interaction scenarios associated with each capability level. These scenarios serve as  concrete targets for system design and will also be used as structured demo cases during the final  presentation.

Level 1: Voice Input to Single-Task Execution 

Level 1 focuses on reliable speech-to-intent grounding and execution of a single scientific task. 

  • Scenario 1: Basic Data Retrieval
    Voice input: 
    “Show sea surface temperature in the Gulf of Mexico in twenty twenty.”
    The system transcribes the spoken query, extracts the target variable, region, and time range,  maps them to a structured request, and executes a single data retrieval task. The response  includes a short spoken summary and an on-screen result (e.g., a table or plot). 
  • Scenario 2: Simple Statistical Analysis 
    Voice input: 
    “What was the average chlorophyll concentration near Hawaii in two thousand eighteen?”
    The system identifies the implicit analysis operation (“average”), executes the corresponding  computation, and returns a numeric result with units, accompanied by a brief spoken  explanation. 

Level 2: Simple Multi-Step Task Execution 

Level 2 extends the system to support spoken queries that imply a short sequence of actions executed in a  fixed order. 

  • Scenario 1: Retrieve and Visualize
    Voice input: 
    “Get sea surface temperature in the Pacific from twenty ten to twenty twenty and plot it.”
    The system decomposes the request into two steps, i.e., data retrieval followed by visualization,  and executes them sequentially. The final output includes a plotted trend and a spoken summary  of the result. 
  • Scenario 2: Analyze and Summarize
    Voice input:
    “Analyze temperature trends in the Arctic after two thousand and summarize the results.”
    The system performs a trend analysis on the relevant data and then generates a concise textual  summary, which is also read aloud to the user. 

Level 3: Clarification and Confirmation for Robust Interaction 

Level 3 introduces basic interaction awareness, allowing the system to engage users when information is  missing or when an action requires explicit confirmation. 

  • Scenario 1: Missing Parameter Clarification
    Voice input: 
    “Show temperature trends for last summer.” 
    The system detects missing or ambiguous parameters (e.g., year and region) and asks a follow-up question to clarify the user’s intent before executing the task (see the sketch after these scenarios).
  • Scenario 2: Confirmation for High-Impact Actions
    Voice input:
    “Export all chlorophyll data for the North Atlantic.”
    Before proceeding, the system prompts the user to confirm the action. Upon confirmation, the system executes the export and clearly displays the final parameters used.
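
A minimal sketch of missing-parameter detection follows; the required-field list and question wording are illustrative assumptions.

```typescript
// Sketch of missing-parameter detection; the required-field list and
// question templates are illustrative assumptions.
interface PartialRequest {
  variable?: string;
  region?: string;
  timeRange?: { start: string; end: string };
}

const questions = {
  variable: "Which variable would you like to see?",
  region: "Which region are you interested in?",
  timeRange: "For which time period?",
} as const;

// Returns the clarification questions to ask before executing.
function clarificationsFor(req: PartialRequest): string[] {
  return (["variable", "region", "timeRange"] as const)
    .filter((field) => req[field] === undefined)
    .map((field) => questions[field]);
}

// "Show temperature trends for last summer." grounds the variable but
// leaves the region ambiguous and "last summer" unresolved:
console.log(clarificationsFor({ variable: "temperature" }));
// -> ["Which region are you interested in?", "For which time period?"]
```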

Technologies and Other Constraints 

Front-End Development 

  • Web-based interface built with React, Vue, or equivalent frameworks 
  • Microphone-based voice input and audio playback 
  • Display of speech transcripts, task steps, and results 
  • Visualization components for plots, numeric summaries, and provenance information

Back-End Development 

  • Integration with speech-to-text and text-to-speech services 
  • APIs for intent grounding and task execution 
  • Lightweight controller for sequential task orchestration 
  • Logging of interactions, task steps, and execution parameters 

Database Design 

  • Storage of interaction logs, including transcripts, structured requests, and results (a sketch of one log record follows this list)
  • Optional storage of user confirmations and clarification steps
  • Support for exporting logs for debugging and evaluation 
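
One possible shape for a single interaction-log record, covering the items above; all field names are assumptions.

```typescript
// Hypothetical shape of one interaction-log record; field names are
// assumptions chosen to cover the storage items listed above.
interface InteractionLog {
  id: string;
  timestamp: string;          // ISO 8601
  transcript: string;         // raw speech-to-text output
  structuredRequest: object;  // the grounded request that was executed
  clarifications?: string[];  // clarification questions asked, if any
  confirmed?: boolean;        // recorded for high-impact actions
  resultSummary: string;      // what was shown and spoken to the user
}
```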

Constraints include ensuring system robustness, avoiding unintended task execution, and maintaining clear separation between user input, system decisions, and final outputs.

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

Teen Health Research (THR), Inc. is a startup dedicated to providing a program for parents and children ages 10 to 19 to inform and facilitate communication related to health and well-being. THR has developed an interactive web app for the program. The “program” is a theoretically sound and empirically evaluated framework consisting of a series of modules on topics such as dealing with curiosity about bodies and changes to them, hygiene, and norms and expectations for relationships and healthy behavior. The app allows parents and children to create profiles and walks them through activities and informational materials step-by-step through the modules of the program.


Background and Problem Statement

The goal is to take the Let’s Talk app from prototype to a deployed version that can smoothly support tens of families and add premium features. The basic app is running and has been tested with a small number of focus group users. It is currently in closed beta.

Project Description

The objectives for this semester’s project are:

  1. Implement authentication through Google/Meta/Microsoft accounts and transition out of self-managed passwords (see the sketch after this list)
  2. Implement user profiles to include/exclude program modules based on subscription levels. A number of modules are already available on the app. Each module contains informational steps that include vocabulary terms, short animation clips of scenarios, motivational sayings, quizzes of various types, and reflection questions.
  3. Research and add a payment processor (similar to Square or Toast) for one-time payments and subscriptions
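
As a sketch of objective 1, the snippet below wires up Google sign-in with Passport, a common choice for Node.js apps; Meta and Microsoft logins follow the same pattern with their own strategies. Route paths and environment-variable names are assumptions.

```typescript
// Minimal sketch of Google sign-in with Passport in an Express app.
// Route paths and environment-variable names are assumptions.
import express from "express";
import passport from "passport";
import { Strategy as GoogleStrategy } from "passport-google-oauth20";

passport.use(new GoogleStrategy(
  {
    clientID: process.env.GOOGLE_CLIENT_ID!,
    clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
    callbackURL: "/auth/google/callback",
  },
  (_accessToken, _refreshToken, profile, done) => {
    // Look up or create the MongoDB user record by profile.id here.
    done(null, profile);
  }
));

const app = express();
app.use(passport.initialize());

// Kick off the OAuth flow, then handle Google's redirect back.
app.get("/auth/google",
  passport.authenticate("google", { scope: ["profile", "email"] }));
app.get("/auth/google/callback",
  passport.authenticate("google", { session: false }),
  (_req, res) => res.redirect("/"));
```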

Technologies and Other Constraints

The Node.js-based app is deployed on Heroku. Content is drawn from the Storyblok CMS via its API; Storyblok has custom modules for content development. A MongoDB database contains app data such as user profiles. The current app is available at http://go.lets-talk-app.com
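
For reference, module content can be pulled from Storyblok’s CDN API with a plain HTTP request, roughly as sketched below; the slug argument and token variable are placeholders.

```typescript
// Sketch of pulling one module's content from Storyblok's CDN API;
// the slug argument and STORYBLOK_TOKEN variable are placeholders.
async function getModuleStory(slug: string): Promise<unknown> {
  const token = process.env.STORYBLOK_TOKEN;
  const res = await fetch(
    `https://api.storyblok.com/v2/cdn/stories/${slug}?token=${token}&version=published`
  );
  if (!res.ok) throw new Error(`Storyblok request failed: ${res.status}`);
  const { story } = await res.json();
  return story; // story.content holds the module's custom component fields
}
```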

http://lets-talk-families.com provides an overview of the program and app.

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

Dr. Card is a teaching professor in the Computer Science department at North Carolina State University whose teaching focuses on game design and development courses in the game development concentration of the Computer Science degree.

Background and Problem Statement

Video game design and development is a nexus point of various disciplines, with individuals from different fields and backgrounds combining their talents and expertise to create a world and interactive experience. With the growing popularity of digital games as a form of entertainment, an art form, and a teaching tool, the need to educate students in game design has grown. Students from varying disciplines and fields of study take game design courses, meaning instructors cannot rely on students prepared by a single disciplinary foundation (such as Design or Computer Science). This increase in the number and diversity of students, especially those not in programming-related disciplines, necessitates an improvement in the available support tools for students in computer science-based game design courses.

Project Description

Currently, one tool used to teach game design is PuzzleScript, which has not been in active development for several years and has some idiosyncratic behavior. This project aims to introduce a new rule-replacement programming tool similar to PuzzleScript with additional features that are not present in the current PuzzleScript implementation.

The tool should permit students to create 2D games using a rule-replacement language similar to PuzzleScript. The tool should include a code editor with syntax highlighting, an execution window to run the game, and the ability to compile and package a build of the game to be played elsewhere.

There are a few requirements on what the new tool should include:

  • Rule replacement based programming grounded in formal logics
  • Real-time execution instead of discrete time steps
  • Continuous or near continuous space instead of a discrete tile grid

The tool would be expected to be usable in the classroom environment for teaching and game creation purposes.

Technologies and Other Constraints

The students may propose various technologies to use in facilitating the creation of the tool. The end tool should align with NC State privacy standards and should not store any student data remotely.

Sponsor Background

Decidio uses Cognitive Science and AI to help people make better decisions and to feel better about the decisions they make. We plan to strategically launch Decidio into a small network fanbase, then grow it deliberately, methodically, and through analytics into a strong and accelerating network.

Background and Problem Statement

When users make a sequence of decisions to purchase a collection of products, such as a wardrobe or fixtures for a home remodel, the individual decisions are not independent in terms of visual and structural features. For example, when choosing light fixtures for a kitchen, the recessed lighting, pendant lighting, dining table lighting, and studio lights for wall art are all chosen to fit an overall theme of materials, sizing, style, form, and color. Users making these choices get overwhelmed because each decision must not only match the overall theme but also work well with the other choices that have been made. Sometimes there is an inversion of preferences as well: one might not like red cars in general but really like red on a particular car (such as a Mustang or Corvette). In this case, there is an exception to their preference for a single item in the collection.

With an innovative visual interface that allows users to navigate their preferences by switching between left-brain and right-brain interactions (photos vs. tables of numbers), Decidio seeks to make the process of discovering, modeling, and interacting with user preferences more pleasurable.

Project Description

The preference learning project aims to develop machine learning models that capture preferences in both the visual domain (pictures of items) and the feature domain (tables of numbers with feature labels). For this project, we want to build an initial database of products with their features and photos. From the app interface, users will search for or select products either from a gallery of images that appears while swiping right or from the collection of features that appears while swiping left. Each user will have a profile with named collections of products. Users indicate their preferences by reordering images or selecting features to narrow down suggested products, and they can add their selected products to lists.

Expected features of the app:

  1. A database of scraped products within a couple of categories (Decidio has already scraped a few items for lighting design and fountain pens, which can serve as a starting point)
  2. A visual interface for displaying groups of product images for comparison and selection and separately for displaying product features. An existing prototype of this interface will be useful to see as a starting point.
  3. A set of preference models with some test users and benchmarks for validating the accuracy and consistency of these models on the given product categories.

There will be two machine learning models trained on data collected through this interface (a sketch of one possible update rule follows the list):

  1. An individual preference model that sequentially updates product rankings based on the user’s expressed preferences
  2. A separate model that evaluates the “goodness” or “coherence” of a given list of items
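
One simple way to realize the first model, offered only as a sketch and not as Decidio’s intended approach, is an Elo-style update over item scores driven by pairwise signals from gallery reordering; the learning rate and starting score below are arbitrary illustrative values.

```typescript
// Illustrative Elo-style sequential preference update: when the user
// places item A above item B, A's score rises and B's falls.
// K (the learning rate) and the 1000 starting score are arbitrary.
const K = 32;

function expectedWin(scoreA: number, scoreB: number): number {
  return 1 / (1 + Math.pow(10, (scoreB - scoreA) / 400));
}

function updatePair(
  scores: Map<string, number>,
  preferred: string,
  other: string
): void {
  const a = scores.get(preferred) ?? 1000;
  const b = scores.get(other) ?? 1000;
  const surprise = 1 - expectedWin(a, b); // large when the result was unexpected
  scores.set(preferred, a + K * surprise);
  scores.set(other, b - K * surprise);
}

// Reordering a gallery yields pairwise signals: each item placed above
// another counts as a "win" for the higher-placed item.
const scores = new Map<string, number>();
updatePair(scores, "pendant-light-42", "pendant-light-17");
```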

Technologies and Other Constraints

There are no technical constraints on this project.

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

Dr. Renee Harrington is an Associate Teaching Professor and Director of Undergraduate Programs in the Department of Health and Exercise Studies at North Carolina State University. Her teaching and research focus on various facets of health and wellness including nutrition, resilience, stress management, and physical activity promotion. 

Background and Problem Statement

At NC State University, as with many other college campuses, students commonly struggle with poor eating habits that contribute to fatigue, stress, reduced academic performance, and long-term health risks. Traditional nutrition education can feel overwhelming, overly technical, and disconnected from students’ everyday routines, resulting in low engagement and limited behavior change.

Although many students want to improve their eating habits, they often lack a low-stakes, positive, and judgment-free way to explore nutrition concepts. Without engaging, hands-on learning opportunities, students have few avenues to build confidence and practical skills around balanced eating. This gap underscores the need for an approach that not only teaches nutrition principles but also integrates guidance seamlessly into students' everyday campus routines. 

To address these challenges, students need a dynamic, personalized, and student-centered learning experience, one that allows them to experiment, make choices, and receive feedback in a supportive, risk-free environment. A platform that frames nutrition as approachable and enjoyable has strong potential to boost awareness, motivation, and day-to-day healthy decision-making. An enhanced system that incorporates personalized feedback and immersive features such as guided decision-making, avatar-based exploration, or real-time campus-specific prompts could meaningfully improve students’ ability to adopt and maintain healthier eating habits.

Project Description

This project will expand upon the work initiated during the Fall 2025 semester, which established four narrative-based scenarios focused on making healthy, balanced choices in NC State dining halls. Building on this foundation, the team will further develop an interactive, game-based platform designed to improve nutrition literacy and healthy decision-making among NC State students.

The platform will engage players through scenario-based and other challenges that reflect real eating situations on and near campus. By making food choices, completing tasks, and observing the outcomes of their decisions in a risk-free environment, students will gain practical knowledge about balanced nutrition and how to apply it in daily life. Real NC State dining locations, meal options, and student preferences will be integrated to create a personalized learning experience. Through puzzles, mini-quests, and branching decision paths, users will receive immediate feedback reinforcing core nutrition concepts such as portion balance, nutrient density, and long-term effects of dietary habits.
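
As a sketch of how a branching decision path with immediate feedback might be represented, the structure below is one possibility; the node fields, dining location, and feedback text are illustrative assumptions.

```typescript
// Hypothetical data shape for one node in a branching dining-hall
// scenario; field names and feedback text are illustrative.
interface ScenarioNode {
  id: string;
  prompt: string;      // the situation shown to the student
  choices: {
    label: string;     // a food choice the student can make
    feedback: string;  // immediate nutrition feedback
    next: string | null; // id of the next node; null ends the scenario
  }[];
}

const exampleNode: ScenarioNode = {
  id: "dining-lunch-1",
  prompt: "It's 12:30 at the dining hall. What do you grab first?",
  choices: [
    { label: "Salad bar with lean protein",
      feedback: "Good nutrient density and portion balance.",
      next: "dining-lunch-2" },
    { label: "Two slices of pizza and a soda",
      feedback: "Fine occasionally; watch portion balance over the week.",
      next: "dining-lunch-2" },
  ],
};
```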

A key strength of this project is its alignment with NC State’s Health and Exercise Studies GEP, which requires every undergraduate to complete a 100-level HES course. This platform will be implemented directly within these required courses, embedding the game into the nutrition module as an experiential learning tool. Faculty will incorporate game data such as choices made or modules completed into class discussions and/or reflection assignments. As a result, the product will positively impact more than 3,000 students each semester, ensuring broad reach and meaningful influence on student well-being.

Future enhancements may include more immersive features such as customizable avatars, campus-navigation gameplay, or adaptive nutrition coaching based on individual decision patterns. These additions would deepen engagement and strengthen the connection between virtual learning and real-world behavior.

Benefits to End Users:

  • High engagement: A game format promotes active learning and sustains student interest.
  • Practical application: Students practice nutrition decision-making using real campus dining options.
  • Personalized experience: Tailored to NC State’s environments and student preferences and lifestyle patterns.
  • Accessible and inclusive: Provides a non-intimidating, self-paced entry point for learning and practicing nutrition concepts.
  • Large-scale, lasting impact: Integrated into required GEP courses, influencing thousands of students each semester and supporting long-term healthy habits that extend beyond the classroom.
  • Open-access potential: Open-source design allows adaptation and implementation at other institutions, extending the platform’s benefits to a wider population of students.

Technologies and Other Constraints

Students working on this project do not need prior nutrition knowledge or expertise; all subject-matter content will be provided. The team has flexibility in selecting technologies and development paradigms, as long as the final product is user-friendly, accessible, and feasible within the semester timeframe. The platform should be designed with sustainability in mind, favoring technologies that are low-cost to maintain and compatible with university-supported hosting environments.

It is preferred that the tool include the following:

  • Compliance: Adhere to NC State’s IT security, accessibility, and data privacy standards.
  • Scenario introduction: Provide a welcome or introductory screen for each scenario, using text, images, and other media as needed.
  • End-of-simulation feedback: Display user performance or a grade summary upon completion of each simulation.
  • Multiple simulations: Support running several scenarios sequentially without requiring the user to exit and re-authenticate. 
  • If LMS integration is included: authentication should use NC State credentials and enable secure export of grades, choices, and other relevant metadata to a database for faculty review.

The platform should be developed as an open-source simulation tool that initially supports the nutrition module in HES courses but is designed for scalability. Its modular architecture will allow additional scenarios, immersive features such as avatars, campus navigation, or adaptive nutrition coaching, and potential adoption by faculty in other departments or institutions with minimal redevelopment. This ensures the tool can reach thousands of students at NC State and potentially many more across diverse campuses.

Sponsor Background

Impartial is a criminal justice nonprofit. We exist to build meaningful human connections and programs that improve the criminal justice system through personal and community-driven engagement. Impartial believes that one of the ways to do that is by engaging future justice leaders in games that can help them better understand what the US justice system is, what role they could play in it, and most importantly, what the system could be, by using gaming to understand possibilities.

Background and Problem Statement

Impartial has built nine criminal justice video games to date: Investigation, Grand Jury, Plea Deals, Motions to Dismiss, Jury Selection, Prosecution, The Defense, Jury Deliberation, and Sentencing. Your challenge is to develop the tenth and final game in the Justice Well-Played series: Post-Verdict. Seven games have been developed through the NCSU Capstone project, creating valuable assets we can share across games. For consistency and efficiency, we’re using the same characters, names, and scenes throughout the series. Post-Verdict concerns the post-verdict appeals and sentencing path in a federal criminal case. Another Senior Design team will be working to consolidate the previous titles in the series into a cohesive whole while you are working on this game.

Project Description

Below is an outline for the Post-Verdict game. There are three possible outcomes going into the game:

  • Not Guilty: Case ends with acquittal and celebration.
  • Hung Jury: Prosecution decides whether to retry the case or dismiss charges.
  • Guilty Verdict: Defense must decide on post-conviction strategy.

After the verdict has been issued, the player is then presented with Post-Conviction Defense Options:

Motion for Judgment of Acquittal / Motion to Overturn Verdict: This motion faces a steep standard, as the judge must view all evidence "in the light most favorable to the prosecution" and can only overturn if no reasonable jury could have reached a guilty verdict. Defense must identify compelling legal errors or evidentiary insufficiencies that undermine the conviction.

Motion for New Trial: Often requested simultaneously, based on judicial errors during trial (improper jury instructions, evidentiary rulings), prosecutorial misconduct, newly discovered evidence, or ineffective assistance of counsel. Provides the judge a middle-ground option. If the judge grants an acquittal and/or a new trial, the prosecution has 30 days to appeal to the 4th Circuit Court of Appeals.

During a 4th Circuit Appellate Hearing, each side presents for 20 minutes to a three-judge panel. No new evidence—only legal arguments based on the trial record.

Possible 4th Circuit Rulings:

  • Affirms Acquittal: Defendant is free, case closed
  • Affirms New Trial: Defendant faces retrial (may negotiate plea with prosecutors)
  • Reverses Trial Judge: Guilty verdict reinstated, proceed directly to sentencing

Sentencing Phase of the Game

If conviction stands after appeals, sentencing is mandatory. First, there is a Pre-Sentence Investigation and Pre-Sentence Report which covers prior criminal history, personal background, family situation, substance abuse and mental health history, employment and education, financial status, and victim impact statements.

Defense Preparation considers character reference letters, psychological evaluations, rehabilitation plans, evidence of family responsibilities, post-release employment and housing plans.

Finally, Sentencing Considerations include advisory guidelines calculations, mandatory minimums (if applicable), offense severity and criminal history category, victim harm (physical, emotional, financial), restitution amounts, and Bureau of Prisons placement recommendations (security level, geographic proximity to family, medical needs, specialized programs).

Sentencing Hearing

  1. Prosecution Arguments: Severity of crime, harm to victims, criminal history, need for deterrence
  2. Defense Mitigation: Character evidence, rehabilitation efforts, family circumstances, alternative sentences
  3. Victim Allocution: Victims speak directly to judge about impact and views on punishment
  4. Defendant's Allocution: Constitutional right to address the court before sentencing—an opportunity to express remorse, explain circumstances, and appeal for mercy
  5. Judge's Decision: Announces sentence with reasoning, including prison term (with credit for time served), supervised release, fines and restitution, special conditions, Bureau of Prison facility recommendation.

Technologies and Other Constraints

Many of the previous games have been implemented using Ren’Py. Any other technology that you think would serve the best interest of the game should also be considered.

Sponsor Background

Impartial is a criminal justice nonprofit. We exist to build meaningful human connections and programs that improve the criminal justice system through personal and community-driven engagement. Impartial believes that one of the ways to do that is by engaging future justice leaders in games that can help them better understand what the US justice system is, what role they could play in it, and most importantly, what the system could be, by using gaming to understand possibilities.

Background and Problem Statement

Impartial has built nine criminal justice video games to date related to a real case: Investigation, Grand Jury, Plea Deals, Motions to Dismiss, Jury Selection, Prosecution, The Defense, Jury Deliberation, and Sentencing. These games have been developed to provide a picture into the workings of the criminal justice system in the United States, and each correspond to parts of a real trial. Seven games have been developed through the NCSU Capstone project, creating valuable assets we can share across games. For consistency and efficiency, we're using the same characters, names, and scenes throughout the series. 

This semester there is a tenth and final entry in the series: Justice Well-Played: Post-Verdict, which concerns the post-verdict appeals and sentencing path in a federal criminal case and will be developed by another Senior Design Games team concurrent with this project. While these games have a shared case, each game has been standalone, and choices made in each game do not carry over to the other games. This project aims to combine the games into a single experience where choices and narrative will carry throughout the games.

Project Description

Multiple student teams over the past several years have developed separate games within the same narrative of a single court case. These games have required the players to make choices as they play the games; however, as each game was developed separately, the choices made in each game did not carry to any future games. This project aims to combine those previous games into a single coherent experience, where the choices made throughout the games affect future game states, and add additional polish to the game as a whole.
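
One engine-agnostic way to think about carrying choices forward is a single shared case-state object that every game segment reads and writes; the sketch below uses TypeScript purely for clarity, though in practice this state would live in Ren’Py’s save/persistent data. All keys and values are illustrative assumptions.

```typescript
// Engine-agnostic sketch of a shared case state; keys and value types
// are illustrative. In Ren'Py this would live in save/persistent data.
interface CaseState {
  decisions: Record<string, string>; // e.g., { "plea_offer": "rejected" }
  evidenceFound: string[];           // carried forward from Investigation
  verdict?: "guilty" | "not_guilty" | "hung_jury";
}

// A later segment branches on choices recorded by earlier segments:
function nextSegment(state: CaseState): string {
  if (state.verdict === "hung_jury") return "retry_decision";
  if (state.verdict === "guilty") return "post_conviction_options";
  return "acquittal_epilogue";
}
```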

Students working on this project will be incorporating each individual game into the larger, connected narrative spanning the entire case, maintaining consistency in:

  • Documents: Review evidence in the game for consistency and identify any gaps; make sure that each piece of evidence is tied to a witness.
  • Narrative: Character names and roles, legal terminology, storylines, timeline of events, and case facts must align across the investigation, trial, and sentencing phases.
  • Characterization: Characters should behave consistently throughout the game in terms of dialogue, emotions, attitudes, and behaviors.
  • Visuals: Unified art style and character design, consistent courtroom and location aesthetics, appropriate color palette, standard UI, and authentic representation of diverse populations.
  • Gameplay Mechanics: Controls and interfaces (e.g., the notebook mechanic) must work the same way across all game segments, with natural difficulty progression, reliable save/checkpoint systems, clear tutorial elements, and transparent win/loss conditions.

To optimize the gameplay experience, the team will also consider the following:

  • Educational Effectiveness: Clear learning objectives for each module; accurate representation of criminal justice procedures, constitutional rights, and due process; consequences of decisions explained clearly; and debriefing moments that reinforce key lessons.
  • Entertainment Value: Varied pacing to maintain engagement, meaningful choices with visible consequences, character development that creates emotional investment, strategic depth that rewards careful thinking, and replay value through multiple paths and outcomes.
  • Justice Values Integration: Themes of fairness, dignity, and human rights woven throughout; systemic issues presented thoughtfully, building empathy for all parties. Critical thinking about the US justice system’s strengths and flaws is encouraged, with redemption and rehabilitation presented as possibilities.
  • Accessibility Considerations: Adjustable text size and font readability, colorblind-friendly palette options, subtitle options for all audio, adjustable difficulty settings, tutorial/hint systems for less experienced players.
  • Audio Consistency: Background music should reflect the emotional tone of each phase, sound effects enhance realism, and voice acting maintains consistent character voices. Balanced audio levels and smooth music transitions are also goals.

Multiple phases of playtesting should be performed to provide feedback on the game, allowing for the team to polish the connected narratives and ensure accuracy in the representation of criminal justice elements.

Technologies and Other Constraints

Previous games have used Ren’Py. Any other technology that you think would serve the best interest of the game should also be considered.

Sponsor Background

Katabasis is a non-profit organization that specializes in developing educational software for children ages 8-15. Our mission is to facilitate learning, inspire curiosity, and catalyze growth in every member of our community by building a digital learning ecosystem that adapts to the individual, fosters collaboration, and cultivates a mindset of growth and reflection. 

Background and Problem Statement

The difference between a student who thrives in computer science and one who grows to hate it often comes down to their debugging experiences. Until recently, debugging was not an explicit focus of computer science classes; instead, it was assumed students would internalize debugging skills through the practice of programming. Debugging requires students to coordinate multiple skills, such as code reading, writing, tracing, and testing—a finding that has been reinforced in various research studies conducted within the context of text-based programming environments (Fitzgerald et al., 2005; Adelson & Soloway, 1985; Vainio & Sajaniemi, 2007; Guzdial, 2015; Spohrer & Soloway, 1986). Additionally, McCauley and colleagues (2008) noted in their comprehensive review of debugging research that proficient debuggers should have knowledge of the intended and current program, an understanding of the programming language, general programming expertise, and knowledge of bugs, debugging methods, and the application domain. Despite increasing recognition of the importance of debugging, there remain surprisingly few studies that explicitly teach debugging skills, and even fewer in K-12 settings. Moreover, it is unclear how the findings and strategies developed from these studies apply to block-based programming or hybrid environments (Kafai et al., 2020), formats that are increasingly used to teach programming to beginners.

A major challenge is that debugging involves several invisible cognitive processes—such as interpreting code, tracing execution, evaluating program behavior, and locating and fixing bugs—that novices cannot easily observe or learn from. Without tools that make these processes explicit, structured, and visual, young learners struggle to develop effective debugging strategies and lose confidence in their ability to understand code.

This project addresses that problem by creating an interactive block-based programming experience that visualizes the core components of debugging: (1) code reading, (2) code tracing, and (3) debugging. By decomposing these tasks and designing clear interaction flows, scaffolded steps, and visual/audio cues, the student team will develop a system that makes expert debugging strategies visible, helping novices build stronger problem-solving skills and a deeper understanding of programming. 

Project Description

You will design a series of interfaces that demonstrate and support at least three different debugging activities in a block-based coding environment: (1) code reading, or the ability to look at code and interpret its purpose and functionality, (2) code tracing, the ability to walk through lines of code individually to understand what is happening in sequence, and (3) debugging, the ability to find and fix the bug/error. Your team will decompose each of these tasks to design the experience, including the sequence of interactions, presentation of information, and useful visual/audio cues. 

Your interfaces should feature clean, visually pleasant design and should adhere to common usability design principles, keeping an elementary school target audience in mind. The team may also receive feedback from various stakeholders over the course of the semester, which will influence the project’s design. 

The interfaces you implement should be well-designed and built with extensibility in mind; this project should be easily compatible with other block-based programming environments created in Unity. To demonstrate the efficacy of your design during live demos, you will integrate your project with Agricoding, an existing block-based programming game and farming simulator created by Katabasis, in which players use code to control a virtual drone that tills soil and plants, waters, and harvests crops on a virtual field. 

Technologies and Other Constraints

This project will be created in Unity to ensure compatibility with Agricoding and other block-based programming environments created in Unity. The version of Agricoding which integrates your project must be built to WebGL for interactive demos. Prior experience with Unity or other game engines is preferred. 

Students will be required to sign over IP to sponsors when the team is formed

Sponsor Background

The North Carolina Museum of Natural Sciences (NCMNS) is a natural history museum in Raleigh, North Carolina. The museum is the oldest in the state and the largest natural history museum in the Southeastern United States. The Paleontology and Geology Lab is one of five research labs in the museum’s Nature Research Center, within its Research and Collections department.

Background and Problem Statement

Cretaceous Creatures (https://cretaceouscreatures.org/) is a public science project at the NC Museum of Natural Sciences that allows school children to look for and record fossils. There is currently a running Dueling Dinosaurs exhibit showing two almost complete dinosaurs that were found in Montana and obtained by the museum.

Students are sent boxes of dirt and sediment from an excavation site with tools for finding and recording small fossils in the sediment. They have a Google Form for recording their classification of each fossil. There is a database of all specimens with information about them and, at the category level, 3D models created by photogrammetry of relevant specimens. Currently the website is hosted in WordPress and the specimens are recorded in Excel files with information collected through Google Forms. The web framework is static, and there isn’t an elegant way to explore the data in the database.

Project Description

This project has 3 thrusts:

  1. Create a scalable backend framework for collecting and storing the data (see the data-model sketch after this list)
  2. Create a frontend experience (e.g. interactive visualization, game-like interactives, simple explanations in comic form) that makes it appealing and engaging to navigate the data and look at contributors and their achievements. 
  3. Develop easy ways to collect and expand on the data. One feature is to integrate photogrammetry to create 3D reconstructions of all submitted specimens from photographs taken from multiple (4?) perspectives.
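
As a starting point for thrust 1, the sketch below shows one possible specimen record covering the information currently spread across Excel files and Google Forms; all field names are assumptions.

```typescript
// Sketch of a specimen record for the new backend; fields mirror the
// data currently in spreadsheets and forms, and the names are assumptions.
interface SpecimenRecord {
  id: string;
  classification: string;  // student-submitted category
  contributorId: string;   // school, classroom, or student group
  boxId: string;           // the sediment box the fossil came from
  submittedAt: string;     // ISO 8601 timestamp
  photos: string[];        // URLs of the multi-angle photographs
  model3dUrl?: string;     // photogrammetry output, once generated
  verified: boolean;       // reviewed by museum staff
}
```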

Technologies and Other Constraints

It would be ideal for quick deployment if much of the work could be done through WordPress plugins and scripts, but we can discuss whether this will be too limiting for scalability. If we choose a different framework, there might be issues with getting it approved by the Museum and NC State OIT, which affects the future management effort. This is the trade-off that we will have to address in choosing the technology solution.

Sponsor Background

Dr. Srougi is an associate professor (NCSU Biotechnology Program/Dept of Molecular Biomedical Sciences) whose research interests are to enhance STEM laboratory skills training through use of innovative pedagogical strategies. Most recently, she has worked with a team to develop an interactive, immersive and accessible virtual simulation to aid in the development of student competencies in modern molecular biotechnology laboratory techniques.

Background and Problem Statement

Biopharmaceutical manufacturing requires specialized expertise, both to design and to implement processes that comply with good manufacturing practice (GMP). Design and execution of these processes therefore requires that the current and future biopharmaceutical workforce understand the fundamentals of both molecular biology and biotechnology. While there is significant value in teaching lab techniques in a hands-on environment, the necessary lab infrastructure is not always available to students. Moreover, while online learning works well for conceptual knowledge, there are still challenges in how best to convey traditional ‘hands-on’ skills to a virtual workforce to support current and future biotechnology requirements. The need for highly skilled employees in these areas is only increasing. Therefore, to address current and future needs, we seek to develop virtual reality minigames of key laboratory and biotechnology skills geared towards workforce training for both students and professionals.

Project Description

The project team has previously created an interactive browser-based simulation of a key biotechnology laboratory skill set: sterile cell culture techniques. This learning tool is geared towards university students and professionals. In the proposed project, we intend to develop two virtual reality minigames using the Unity game engine to reinforce the fundamental skills required to perform the more advanced laboratory procedures represented in the simulation. The game interactions occur through the Meta Quest 3 VR system. This project is Phase II of a previous senior design project, which produced one minigame (use of a pipet aid; see below) and one prototype biohaptic device. The current project will focus on making the pipet aid minigame ready to deliver to users in a classroom setting. The enhancements for the pipet aid minigame include: 1) refinement of serial communications to integrate and use the biohaptic device in the game, 2) a clear and easy-to-use tutorial for the game (especially for users new to VR), 3) clear feedback on the user’s pipet aid technique, and 4) ease of navigation within the game environment. Finally, this team will create a second minigame focused on the use of micropipettes, which will follow a similar workflow to the pipet aid minigame; the team will also design a bespoke biohaptic prototype for micropipette usage that can integrate with the game.

Minigame content: All minigames will feature the following core laboratory competencies that would especially benefit from advanced interactivity and realism: 1) how to accurately use single-channel micropipettes and 2) how to accurately use a pipet aid (the minigame that has already been created).

Length and Interactivity: Minigames should aim to be a 10-15 minute experience. The games should allow users free choice to explore and engage in the technique while providing real-time feedback to correct any errors in user behavior. They should be adaptable for future use with biohaptic feedback technology to provide a ‘real world’ digital training experience. A prototype biohaptic pipet aid has been created and is available to iterate upon and improve.

Cohesion: The set of minigames should connect to themes and design represented in the virtual browser-based simulation previously developed. Therefore, the visual design of the minigames should closely match the real-world laboratory environment.

Technologies and Other Constraints

Students working on this project do not need content knowledge of biotechnology or biotechnology laboratory skills. However, a basic interest in the biological sciences and/or biotechnology is preferred. This project will be a virtual reality extension of a browser-based interactive simulation written in Three.js within a GitHub repository. Development of the minigames should be done in Unity. Games should be designed to run on relatively low-end computer systems and be guided by accessibility. Proper licensing permissions are required if art and/or other assets are used in game development.

Students will be required to sign over IP to sponsors when the team is formed