Click on a project to read its description.
Aspida is a tech-driven, agile insurance carrier based in Research Triangle Park. We offer fast, simple, and secure retirement services and annuity products for effective retirement and wealth management. More than that, we’re in the business of protecting dreams: those of our partners, our producers, and especially our clients.
When developers submit pull requests, it is often unclear which platforms, features, or business areas can be impacted by the changes. This lack of visibility slows down QA planning, increases regression risk, and makes it difficult to prioritize testing effectively. Manual tagging and tribal knowledge are not scalable solutions. A tool that leverages machine learning to analyze code changes and provide actionable insights into impacted areas would significantly improve release confidence and streamline the testing process.
Aspida proposes building a tool that automatically analyzes PRs, clusters files by functional areas, and predicts impacted features or components. The impact analysis should consider both structural and semantic relationships within the codebase to provide accurate predictions. The system should:
Bandwidth is a software company focused on communications. Bandwidth’s platform is behind many of the communications you interact with every day. Calling mom on the way into work? Hopping on a conference call with your team from the beach? Booking a hair appointment via text? Our APIs, built on top of our nationwide network, make it easy for our innovative customers to serve up the technology that powers your life.
Any organization working with personal or confidential data requires tools that can remove sensitive information safely and accurately. Manual redaction processes are difficult to scale and can lead to errors. Bandwidth has an opportunity to provide automated, privacy-first tooling that aligns with our trust and compliance commitments.
The AI-Redaction Service is a tool designed to automatically detect and remove sensitive information—such as phone numbers, emails, dates (e.g., date of birth), credit card numbers, and other Personally Identifiable Information (PII)—from call transcripts or audio. It enhances privacy and compliance for customers using Bandwidth’s call recording and transcription features. Students will build a text-based redaction MVP, with optional audio enhancements as stretch goals.
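As a rough illustration of the text-redaction MVP's core loop, the sketch below (in Python, with hypothetical pattern names) shows regex-based detection and replacement; a production service would need far richer patterns and likely a named-entity model:

```python
import re

# Minimal regex-based PII redaction sketch. Real transcripts need richer
# patterns (and likely an NER model); these labels are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "DATE": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected entity with a bracketed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Call me at 919-555-0100 or jane.doe@example.com, DOB 01/02/1990."))
```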
The Computer Science department at NC State teaches software development using a “real tools” approach. Students use real-world industry tools, such as GitHub and Jenkins, to learn about software development concepts like version control and continuous integration. By using real tools, students are better prepared for careers in computing fields that use these tools.
Google Docs maintains a history of all revisions made to a given document. Several browser plugins exist that allow a user to replay the changes made to a document. These browser plugins allow users to jump backward, forward, or ‘replay’ an animation of the revision history at custom speeds. The plugins also provide analytics, such as displaying a calendar view that identifies which days revisions were made.
GitHub provides a similar experience to Google Docs, maintaining a revision history that tracks changes to a software codebase. However, it’s often difficult to understand how a code file has changed over time, especially for team assignments that may involve hundreds of commits. GitHub provides an interface to manually inspect commit histories, but no interface exists to view a ‘live replay’ of the history of a given file.
Many core computer science courses are considered ‘large’ courses, often with over 200 students enrolled each semester. A tool that allows replaying the full history of changes to a file in a repository could help students better understand their own coding approaches (answering questions such as “What have I tried already?” or “What work has my team member contributed since our last meeting?”). The tool could also help teaching team members better assist students in debugging logic errors, since they could quickly replay a history of edits to a file to see what a student has already attempted.
The user should indicate which GitHub repository and branch they wish to inspect. The user should also be able to select a specific file for which to replay commit history. The tool should allow users to perform actions such as ‘rewind’, ‘fast forward’, ‘pause’ or ‘play’ while presenting a visual replay of all changes made to the given file. Each change should be clearly highlighted and annotated with information such as the name of the person who committed the code. A visual timeline should also be presented to show a history of commits. Within this timeline, the tool should clearly indicate how many lines of code were added/removed from one commit to the next. Additional features may be identified during the course of the project.
The commit history replay tool should be designed and implemented as a web application. The backend should be implemented in either JavaScript/TypeScript or Java as a REST API. The frontend will require the use of visualization libraries, such as D3.js and D3-derived libraries. Using a frontend framework such as React is acceptable. The number of dependencies should be limited to those strictly needed, to simplify maintenance of the application.
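As a rough sketch of the data access involved (shown in Python for brevity, though the actual backend is expected to be JavaScript/TypeScript or Java), the snippet below pulls a file's commit history from the GitHub REST API; the endpoint and parameters are real, but the helper name and field selection are illustrative:

```python
import requests

def fetch_file_commits(owner: str, repo: str, path: str, branch: str, token: str | None = None):
    """Fetch the commit history for a single file via the GitHub REST API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    params = {"path": path, "sha": branch, "per_page": 100}
    response = requests.get(url, headers=headers, params=params)
    response.raise_for_status()
    # Each entry includes the SHA, author, and message; the diff for a given
    # commit can then be fetched from /repos/{owner}/{repo}/commits/{sha}.
    return [
        {
            "sha": c["sha"],
            "author": c["commit"]["author"]["name"],
            "date": c["commit"]["author"]["date"],
            "message": c["commit"]["message"],
        }
        for c in response.json()
    ]
```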
Dr. Tian is a research scientist who works with children on designing learning environments to support artificial intelligence learning in K-12 classrooms.
Dr. Tiffany Barnes is a distinguished professor of computer science at NC State University. Dr. Barnes conducts computer science education research and uses AI and data-driven insights to develop tools that assist learners’ skill and knowledge acquisition.
Elementary students rarely have opportunities to learn computational thinking, data literacy, and AI concepts through creative, meaningful activities. Traditional instruction often separates storytelling from STEM learning, even though upper-elementary students naturally reason about cause and effect, choices, and narrative structure. Schools and libraries need developmentally appropriate tools that let children learn these skills through playful exploration—not through direct instruction alone.
This project addresses this gap by using AI-supported computational storytelling. The challenge for the student team is to design and develop components of a web-based platform where children (grades 3–5) can create branching stories, test the logic behind their narratives, analyze reader data, and receive age-appropriate AI scaffolding. The motivation is to broaden access to computational thinking and AI literacy skills by embedding them into an engaging creative medium that teachers and librarians can use in classrooms, camps, and community programs.
We envision a modular, child-friendly web platform (see Figure 1 for user flow overview) that guides students through story creation, story logic design, and data-driven story analysis, with optional AI assistants providing hints and reflective prompts.

Figure 1: Storytelling Platform Overview
We envision that the platform will contain the following components:
Because of the scope of senior design projects, we encourage student teams to choose one to three of Modules 1-4 as a semester project.
Lina, a fourth grader, logs into the AI-in-the-Loop platform for the first time to create a story. She wants to create a story about a clever fox named Mira. Lina clicks on the character card to create Mira and types in Mira’s strengths: smart and brave (see Figure 1 illustrating some of these choices). The AI prompts her to consider weaknesses: “What challenges might Mira face? Adding a weakness can make your story more interesting.” Lina adds: “Mira can’t resist exploring dangerous places.” With that, Mira feels like a real character, and Lina is eager to see what adventures she might have.

Lina starts creating an event card: “Mira finds a dark cave in the forest.” Below the card, the platform prompts her to add choices. Lina clicks the “+” button twice and types: 1) explore the cave or 2) run away. Each choice automatically generates a new blank Event Card, which Lina clicks to add descriptions and small illustrations. The system prompts Lina to assign probabilities to each path, and she initially makes them equal: explore (50%) and run away (50%). The AI gently suggests: “Since Mira is brave and clever, should the chance of exploring the cave be higher?” Lina thinks for a moment and decides to raise it to 70%. Once Lina has filled in all next-event cards, she clicks “Play Your Story”. She clicks through each choice, watching the story unfold dynamically. Some outcomes succeed, others fail, and the AI gently prompts reflection: “Why do you think your hero lost in this path? Could a different attribute have helped?” Lina adjusts a choice, testing how Mira’s bravery changes the outcome.
Once Lina finishes the story, she publishes her story to the story database on the platform. Over the next week, her peers play her story and some even choose to remix it to extend the story beyond her original narrative, and the platform collects reader data: which paths were chosen most, which failed, and which led to surprise endings. Lina later opens her Data Dashboard to examine the story outcomes, sees that most readers avoided running away, and notices that her “explore” choice led to many successful outcomes.
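One minimal way to represent such a story (a sketch only, with all card names and weights hypothetical) is a graph of event cards whose choices carry probability weights, sampled during playback:

```python
import random

# Hypothetical event-card structure for a branching story like Lina's.
# Each choice carries a probability weight, as described in the scenario.
story = {
    "cave": {
        "text": "Mira finds a dark cave in the forest.",
        "choices": [
            {"label": "explore the cave", "next": "treasure", "weight": 0.7},
            {"label": "run away", "next": "home", "weight": 0.3},
        ],
    },
    "treasure": {"text": "Mira discovers a hidden treasure!", "choices": []},
    "home": {"text": "Mira trots safely home.", "choices": []},
}

def play(story: dict, start: str) -> list[str]:
    """Walk the story graph, sampling each branch by its assigned weight."""
    path, node = [], start
    while node:
        card = story[node]
        path.append(card["text"])
        if not card["choices"]:
            break
        choice = random.choices(
            card["choices"], weights=[c["weight"] for c in card["choices"]]
        )[0]
        node = choice["next"]
    return path

print("\n".join(play(story, "cave")))
```

Logging which branch each reader takes during such playthroughs is also what would feed the Data Dashboard described above.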
Children explore computational logic and data literacy through creative storytelling. Educators gain a standards-aligned tool for integrating STEM skills into authentic narrative tasks.
Students can begin by exploring Twine (twinery.org) to understand branching story representation, node-based narrative structure, and basic functionality. This platform will not use Twine directly, but can take some inspiration from Twine’s editor and player workflow.
Suggested Technologies:
Preferred Paradigm:
Constraints & Considerations:
Nathalie Lavoine is an Associate Professor in Renewable (Nano)materials Science and Engineering. As part of her research and teaching initiatives, she focuses on developing sustainable packaging from plant-derived resources to replace petroleum-derived products, lower the environmental footprint of packaging and increase the shelf-life and safety of food products.
By training, Dr. Lavoine is a Packaging Engineer. As an instructor at NC State, she shares her passion by offering an annual undergraduate-level course on fiber-based packaging and giving regular guest lectures on this topic.
A common misperception reduces packaging to ‘just a box’. The reality is that sustainable containers are the product of a highly complex, multidisciplinary orchestration. Their development requires the integration of materials science, environmental ethics, mechanical engineering, automated logistics, and beyond. This project is driven by the need to dismantle this reductive view, engaging students, faculty and the public to recognize the technical depth, rigorous labor, and ethical considerations embedded in the materials we use every day.
The field of Sustainable Packaging is a multidisciplinary science involving complex material properties, historical context, and intricate lifecycle data. Traditional educational methods, primarily static lectures and summative paper-based assessments, often struggle to engage students with the highly technical and data-driven nature of the subject. There is a "visualization gap": students can memorize facts about recycling rates or polymer barriers, but they lack an interactive way to see how these elements combine to create a "complete" sustainable solution.
The motivation for this project is to bridge this gap through gamification. By transforming a dense database of packaging science into a competitive, "Kahoot-style" trivia experience, we can increase knowledge retention and provide students with a sense of progress. The project addresses the need for a modern, interactive tool that can be used both for individual study and as a real-time classroom engagement platform.
The proposed solution is "The Sustainable Box," a full-stack web application. The core mechanic is a trivia game where players answer questions across five color-coded categories (such as History, Technical, 3 Pillars, Ethics, and End-of-Life).
Some examples of key features for this game:
Other features can be developed and included. This list is not exhaustive, and I would appreciate student input, as they will be the primary audience. We could also consider different difficulty levels.
Another aspect of the project will be the generation and collection of questions & answers. I may not have enough time and bandwidth to find all the different questions per category (I believe a classic Trivia game relies on 30-50 questions per category). Hence, the student team would be expected to do some additional literature research and to dive into this topic.
Example categories include: 1- history & evolution (ancient materials, industrial revolution, mid-century rise of plastics), 2- technical aspects/engineering (polymers, barrier properties, manufacturing process, structural integrity), 3- sustainability (environmental impact, social responsibility, economic viability – global vision), 4- consumerism & ethics (marketing psychology, regulations, labeling, user experience), and 5- end-of-life and data (recycling rates, biodegradation vs composting, LCA, waste management systems).
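A possible question-bank schema, purely illustrative (the field names, difficulty scale, and sample question below are assumptions, not sponsor-provided content), might look like:

```python
from dataclasses import dataclass
import random

# The five color-coded categories named in the project description.
CATEGORIES = ["History", "Technical", "3 Pillars", "Ethics", "End-of-Life"]

@dataclass
class Question:
    category: str        # one of CATEGORIES
    prompt: str
    options: list[str]   # four answer options, Kahoot-style
    answer: int          # index into options
    difficulty: int = 1  # 1 = easy ... 3 = hard, per the difficulty-levels idea

bank = [
    Question(
        category="End-of-Life",
        prompt="Which process breaks packaging down under controlled industrial conditions?",
        options=["Recycling", "Composting", "Incineration", "Landfilling"],
        answer=1,
    ),
]

def draw(bank: list[Question], category: str) -> Question:
    """Pick a random question from the requested category."""
    return random.choice([q for q in bank if q.category == category])
```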
LexisNexis Legal & Professional, an information and analytics company, states its mission as: to advance the rule of law around the world. This involves combining information, analytics, and technology to empower customers to achieve better outcomes, make more informed decisions, and gain crucial insights. They strive to foster a more just world through their people, technology, and expertise for both their customers and the broader communities they serve.
In its continuing mission to support the Rule of Law, LexisNexis has around 3,000 people working on over 200 projects per year developing software for its products.
In a rapidly changing environment, as opportunities arise and priorities shift, the question often asked (and rarely answered with confidence) is “What is the consequence of moving someone from one project to another?”
LexisNexis needs an intuitive tool to manage and track the association of people with projects, to provide the insight and data necessary to support business priority decisions.
LexisNexis is looking for a simple, intuitive application that will allow the management of resource allocation to teams and projects.
The tool will be used by Software Development Leaders to group their people into teams and to associate them to projects for a given duration.
It should help Leaders readily determine issues and opportunities with their teams and projects, and take actions accordingly.
The data collected will support decision making when considering the resourcing of projects when competing priorities need to be considered.
It should ultimately also support financial tracking and planning.
This project extends a foundation laid by a previous NCSU Senior Design team.
The team is at liberty to determine which elements of the previous team’s work they wish to retain and which they feel would benefit from rework/reimplementation.
The preferred solution would be an application accessible through Microsoft Teams, LexisNexis’ collaboration tool of choice.
LexisNexis is best placed to support development in C#, .Net, Angular and SQL Server, although the team may consider other technologies if appropriate.
The initial source of data will be CSV/Excel spreadsheets.
Organizational data will ultimately be sourced through Active Directory, available through Microsoft Graph API.
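As a sketch of that eventual integration (shown in Python for brevity, though LexisNexis is best placed to support C#/.NET, where the Microsoft Graph SDK offers equivalent calls), organizational data could be paged out of the real /users endpoint like this:

```python
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def fetch_users(access_token: str) -> list[dict]:
    """Page through /users to pull basic organizational data from Active
    Directory via Microsoft Graph (a valid OAuth access token is assumed)."""
    users, url = [], f"{GRAPH_BASE}/users?$select=id,displayName,department,jobTitle"
    headers = {"Authorization": f"Bearer {access_token}"}
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        data = resp.json()
        users.extend(data["value"])
        url = data.get("@odata.nextLink")  # Graph paginates results
    return users
```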
ShareFile is a leading provider of secure file sharing, storage, and collaboration solutions for businesses of all sizes. Founded in 2005 and acquired by Progress Software in 2024, ShareFile has grown to become a trusted name in the realm of enterprise data management. The platform is designed to streamline workflows, enhance productivity, and ensure the security of sensitive information, catering to a diverse range of industries, including finance, healthcare, legal, and manufacturing.
In today’s business environment, every organization—whether a startup, small business, or enterprise launching a new product—must establish a strong and recognizable brand identity. A brand identity includes visual elements such as logos, color palettes, and typography, as well as messaging components like taglines and brand voice. These elements shape how customers perceive a business and significantly influence trust, credibility, and market differentiation.
Historically, creating a cohesive brand identity required hiring branding agencies or freelance designers. These services often cost thousands to tens of thousands of dollars and can take weeks or months to complete. For many small businesses and entrepreneurs, this level of investment is unrealistic. As a result, they frequently launch with inconsistent, unprofessional, or generic branding that limits their competitiveness.
DIY design tools have attempted to democratize branding, but they still require substantial creative skill and strategic understanding. Users must know color theory, design principles, typography, and marketing psychology to produce effective brand assets. Even when tools provide templates or basic logo generators, the results often lack originality and fail to communicate a unique brand story.
Another major challenge is brand cohesion. Ensuring that a logo works with a color palette, that a tagline aligns with the visual identity, and that all elements communicate the right message requires expertise most users do not possess. Existing tools treat each asset separately, leaving users to manually piece together a brand system.
The current landscape has reduced the barrier to creating assets, but not to creating high-quality, cohesive, professional brand identities. We want to further reduce the time, cost, and expertise required by applying generative AI technology. By leveraging AI, a brand identity platform could generate complete, harmonized brand systems based on simple user prompts and intelligent design principles.
The goal of this project is to create an AI-powered Brand Identity Studio that enables users to generate comprehensive, cohesive brand assets—including logos, color palettes, and taglines—through natural language prompts. The system will combine multimodal generative AI (text-to-image, text-to-text, and algorithmic color generation) with a backend platform for managing brand projects, asset versions, and analytics.
There are two primary user personas:
These users initiate brand projects by providing prompts such as “Create a modern, minimalist brand identity for a sustainable skincare company.” They can iterate on AI-generated assets, mix and match components, and export production-ready files.
These users view generated assets, provide feedback, and help evaluate which brand directions best fit the business goals.
The platform will ensure that all generated assets work harmoniously—colors are accessible and complementary, logos scale effectively, and taglines align with the intended brand voice and target market.
AI-Generated Brand Systems
Use generative AI models to produce:
Prompt-Based Generation
Users describe their business, audience, and style preferences. The AI generates:
Asset Types
Customization Tools (Stretch)
Preview Mode (Stretch)
Show assets applied to mockups, such as:
Brand Workspace
An interface where users:
Asset Renderer
Display logos, palettes, and taglines in a cohesive layout.
Export Tools
Allow users to download:
Flexible Schema
Support multiple asset types and versions per project.
CRUD APIs
Asset Validation
Ensure generated assets meet accessibility and quality standards (e.g., color contrast).
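For the color-contrast check specifically, the WCAG 2.x formula is well defined; a minimal Python validator might look like the following (the 4.5:1 threshold is the WCAG AA requirement for normal body text):

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance for an sRGB hex color like '#1A2B3C'."""
    rgb = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    lin = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in rgb]
    return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio per WCAG: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg: str, bg: str) -> bool:
    """WCAG AA requires at least 4.5:1 for normal body text."""
    return contrast_ratio(fg, bg) >= 4.5

print(contrast_ratio("#000000", "#FFFFFF"))  # 21.0, the maximum possible
```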
AI Model Integration
Use external or custom-trained models for:
Natural Language Input
Users describe their brand vision in plain language.
Storage
Store:
Data Export (Stretch)
Allow export of brand project data for external analysis.
Brand Performance Dashboard
Track:
Industry Benchmarking
Compare generated brand styles to common patterns in similar markets.
Technologies and Other Constraints
Frontend: React is suggested as a front-end framework.
Backend: Python is suggested for any inference or semantic RAG back-end.
Cloud Providers: If cloud providers are used, AWS is preferred.
The Undergraduate Curriculum Committee (UGCC) reviews courses (both new and modified), curriculum, and curricular policy for the Department of Computer Science.
North Carolina State University policies require specific content for course syllabi to help ensure consistent, clear communication of course information to students. However, creating or revising a course syllabus to meet updated university policies can be tedious, and instructors may miss small updates to mandatory texts the university requires in a course syllabus. Updating a course’s schedule each semester is similarly tedious. In addition, the UGCC must review and approve course syllabi as part of the process for course actions and reviewing newly proposed special topics courses. Providing feedback or resources to guide syllabus updates can be time-consuming and repetitive, especially when multiple syllabi require the same feedback and updates to meet university policies.
The UGCC would like a web application to facilitate the creation, revision, and feedback process for course syllabi for computer science courses at NCSU. An existing web application enables access to syllabi for users from different roles, including UGCC members, UGCC Chair, and course instructors (where UGCC members can also be instructors of courses). The UGCC members are able to add/update/reorder/remove required sections for a course syllabus, based on the university checklist for undergraduate course syllabi. Instructors are able to use the application to create a new course syllabus, or revise/create a new version of an existing course syllabus each semester. The tool provides the functionality for review comments and resolution, important dates (e.g., holidays and wellness days), and the inclusion of schedule information.
We are building on an existing system. The focus this semester will be on template item editing and propagation, syllabus duplication with schedule updates, and front-end refactoring and automated testing.
New features include:
Process improvements include:
Blue Cross and Blue Shield of North Carolina (Blue Cross NC) is the largest health insurer in the state, serving ~5 million members with ~5,000 employees across Durham and Winston‑Salem. Since 1933, Blue Cross NC has focused on making healthcare better, simpler, and more affordable, and on tackling the state’s most pressing health challenges to drive better outcomes for North Carolinians.
Employer HR/Benefits administrators need a straightforward way to perform member maintenance for group insurance: adding subscribers and dependents, terminating members, updating demographics and contact information, and handling effective dating (including retroactive changes). Current processes are often fragmented across multiple tools and require manual interpretation of complex policy/business rules.
Students will build an end‑to‑end system—front to back—that demonstrates a clean user experience on the front end with server-side rendering and Micro UIs. They will include a lightweight AI policy assistant that provides real-time, non-binding guidance during HR workflows. Users can chat to describe their task, and the assistant surfaces relevant policy impacts and prompts for confirmation on downstream effects.
The project uses Vue with Nuxt 3 (SSR) for the frontend, adopting a micro‑frontend architecture via Vite Module Federation. The backend runs on Java with Quarkus, exposes GraphQL APIs, and supports any relational database (PostgreSQL recommended). Constraints include synthetic/mock data only and deployment preferably on OpenShift/Kubernetes, with stretch goals for observability (OpenTelemetry + Tempo), TLS encryption, and SSO with Ping OAuth.
The Campus Writing & Speaking Program (CWSP), within the Office for Faculty Excellence (OFE), supports faculty in embedding oral, written, and digital communication across the curriculum. Co-directed by Dr. Kirsti Cole and Dr. Roy Schwartzman, with Senior Strategic Advisor Dr. Chris Anson, CWSP leads the Writing & Speaking Enriched Curriculum (WSEC) and ACI initiatives. Its work has helped position NC State as a top US university for writing-in-the-disciplines.
Phase I delivers the CWSP ACI Certificate Companion, a unified system for faculty to track progress, store artifacts, and submit capstones.
Phase II expands this foundation by creating a Student Partner Program that connects undergraduates with ACI faculty projects, while also layering in advanced innovation features.
Key possibilities include:
Phase II is visionary but realistic: it leverages the Phase I system, introduces a student role, and layers in future-facing features that distinguish NC State as a leader in communication innovation.
Vision. Extend the platform into a multi-role ecosystem where faculty and students collaborate, supported by integrated resources and innovative digital tools.
Fit for Senior Design. The project requires scalability, permissions logic, and user-experience innovation, and provides opportunities for backend, frontend, AI integration, and UX design work.
Timeline and scope for the Spring 2026 semester to be determined with sponsors.
Jenn Woodhull-Smith is a lecturer in the Poole College of Management and has developed open source textbooks for several entrepreneurship courses on campus. The creation of open source micro-simulations based on the course content will not only enhance student learning in and outside of the classroom, but also provide a cost effective learning tool for faculty and students.
Currently, there is a significant lack of freely accessible simulations that effectively boost student engagement and enrich learning outcomes within educational settings. Many existing simulations are typically bundled with expensive textbooks or necessitate additional purchases. An absence of interactive simulations in an Entrepreneurship course diminishes student engagement, limits practical skill development, and provides a more passive learning experience focused on theory rather than real-world application. This can reduce motivation and readiness for entrepreneurial challenges post-graduation.
Our primary goal is to develop an open source simulation platform that initially supports the MIE 310 Introduction to Entrepreneurship course, but could be later made accessible to all faculty members at NC State and eventually across diverse educational institutions.
The envisioned software tool is a versatile open source tool designed to create visual novel-like mini-simulations with content and questions related to a certain course objective. The intent is to empower educators to be able to create their own simulations on a variety of different topics. Faculty will be able to develop interactive learning modules tailored to their teaching needs. This tool needs to be able to export grades, data, and other relevant information based on the following requirements:
Suggestions for the Spring 2026 team are the following:
Technologies used in prior semesters include:
Decidio uses Cognitive Science and AI to help people make better decisions and to feel better about the decisions they make. We plan to strategically launch Decidio into a small network fanbase, then grow it deliberately, methodically and through analytics into a strong and accelerating network.
Consumer platforms live or die by their ability to solve the Cold Start Problem. We require tooling to simulate, interrogate, and forecast network formation under different strategic assumptions so that (1) execution aligns tightly with model predictions, (2) investor communication is concrete and falsifiable, and (3) once launched, live telemetry can be visualized against projections for adaptive steering.
The solution includes the implementation of a Domain Specific Language embedded in a Common Lisp REPL and a visualization engine provided through a webapp. The DSL will be provided and is designed to closely mirror the thinking process of membership network creation experts and strategists. Lisp will be the supporting language with its highly interactive Read-Eval-Print-Loop and IDE ecosystem. All of Lisp's capabilities will be directly exposed to the user so they can create anything from simple imperative simulations (scripts) to applicative algorithms to functional schemes, etc. The webapp will be responsible for presenting the visualizations in extremely sophisticated and polished ways. (This is not a behind-the-scenes utility.)
Steel Bank Common Lisp (SBCL) will be used exclusively, along with items like Quicklisp for package management and preferably Emacs + SLIME for the IDE. Technologies under consideration for bridging the REPL to server and/or webapp include drakma, hunchensocket, woo + woo-websock, clws and websocket-driver. Strong preferences for the bridge will be taken into consideration; otherwise, Decidio will provide initial guidance. The visualization engine should run purely on the server and/or webapp client. The stack will be standard full-stack TypeScript/JavaScript, CSS, HTML and Node.js. The visualization itself will be a Temporal Force-Directed Graph (similar to https://observablehq.com/@d3/temporal-force-directed-graph) with full playback capability. For this we recommend using D3.js.
Hitachi Energy develops advanced transformer design optimization software, including AIDA, to support engineering teams in creating efficient and reliable solutions. Comprehensive documentation is critical for maintainability, onboarding, and accelerating development cycles.
The AIDA codebase, written in C#, currently lacks adequate documentation, making it challenging for developers to understand class structures, methods, and parameters. Manual documentation is time-consuming and prone to inconsistencies. There is a need for an automated, AI-driven approach to generate accurate and searchable documentation for the entire codebase.
The goal of this project is to automate documentation generation for the AIDA C# codebase using AI and modern tooling:
This solution will ensure maintainability, improve developer productivity, and provide a scalable approach for future projects.
A high-level view of the proposed solution is as shown below:

Develop the solution as a SaaS tool hosted on Microsoft Azure, leveraging OpenAI APIs, Roslyn, and DocFX within the appropriate technology stack. While designed for automated documentation generation with minimal human intervention, incorporate a human-in-the-loop approach to allow developers to review, provide feedback, and override AI-generated comments and summaries.
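As a sketch of the AI step only (Roslyn extraction and DocFX rendering are separate stages of the pipeline), the snippet below uses the official OpenAI Python SDK to draft an XML doc comment for one extracted member; the model choice and prompt wording are assumptions:

```python
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_xml_doc(csharp_member: str) -> str:
    """Ask an LLM for a C# XML documentation comment for one extracted member.
    In the full pipeline, Roslyn would supply `csharp_member` and DocFX would
    render the annotated source into a searchable site."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": "Write concise C# XML doc comments (///) for the given member."},
            {"role": "user", "content": csharp_member},
        ],
    )
    return response.choices[0].message.content

print(generate_xml_doc("public decimal ComputeLoadLoss(double current, double resistance)"))
```

The human-in-the-loop requirement would then surface this draft for developer review, feedback, and override before it is committed.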
Hitachi Energy serves customers in the utility, industry, and infrastructure sectors with innovative solutions and services across the value chain. Together with customers and partners, we pioneer technologies and enable the digital transformation required to accelerate the energy transition toward a carbon-neutral future.
Hitachi Energy specializes in traction transformers designed for transportation applications. These transformers deliver high uptime, safety, and reduced energy costs through superior efficiency and lightweight construction. They are engineered for resilience in harsh environments and unstable grids, ensuring low maintenance and reliable performance.
Currently, tracking the status of projects within the Traction Transformer portfolio requires manual collation of data from multiple sources (project management tools, emails, spreadsheets). This process is maintained in Excel, which is not scalable and consumes significant human resources.
The objective of this project is to develop an automated AI-driven pipeline that:
This solution will streamline project monitoring, reduce manual effort, and enhance decision-making through timely and accurate information.
A high-level view of the proposed solution is as shown below:

Develop the solution as a SaaS tool hosted on MS Azure, leveraging the appropriate technology stack. While designed for minimal human intervention, incorporate a human-in-the-loop approach to allow feedback and override AI decisions.
Progress Software is a global enterprise software company that builds tools and platforms used by developers and organizations worldwide to create, deploy, and operate modern applications. As part of our ongoing focus on innovation and AI-driven transformation, this project is sponsored by Progress’s centralized Software Development Lifecycle (SDLC) organization, which is responsible for evolving how 1,000+ engineers across the company build software—exploring how AI, automation, and agent-based systems can meaningfully improve developer productivity, software quality, and delivery speed.
Progress engineering teams work across multiple CI/CD tools—Jenkins, GitHub Actions, Harness, Azure DevOps, and Buildkite—each with different conventions, security controls, and maturity levels. Engineers are often asked to perform tasks in a tool they don’t know (X) even though they are proficient in another (Y). While our SDLC strategy emphasizes Pipelines-as-Code, reusable components, and embedded governance, there is no capability that guides engineers step-by-step to implement changes “the Progress way” in unfamiliar tools.
This gap leads to slower onboarding, uneven compliance, duplicated effort, and inconsistent application of shared modules/templates. The problem is not a lack of standards or inner source assets—it’s making those assets immediately actionable, with guided translation from the engineer’s starting point to the target tool and pipeline pattern.
Design and prototype an AI assistant that helps engineers complete real CI/CD tasks in any of our pipeline tools by translating from what they already know (Y) to what they need to do (X) while enforcing Progress best practices.
What the assistant will do:
“I know Azure DevOps in MOVEit. Help me write and roll out a new common module in Harness the Progress way—explain each step in Azure DevOps terms so I learn while doing.”
Outcomes for end users: Faster, compliant delivery; reduced support burden on experts; increased reuse of golden modules; improved SDLC scorecard metrics (lead time, deployment frequency, change failure rate).
Preferred technologies (flexible based on availability):
Siemens Healthineers develops innovations that support better patient outcomes with greater efficiencies, enabling healthcare providers to meet the clinical, operational, and financial challenges of a rapidly changing healthcare landscape. As a global leader in medical imaging, laboratory diagnostics, and healthcare information technology, Siemens Healthineers has deep expertise across the entire patient care continuum, from prevention and early detection to diagnosis and treatment.
Within Siemens Healthineers, the Managed Logistics organization supports service engineers who perform planned and unplanned maintenance on imaging and diagnostic equipment worldwide. The Managed Logistics team plays a critical role in ensuring that the right replacement parts reach the right engineer at the right place and time, directly contributing to reliable patient care.
Siemens Healthineers’ software developers and data analysts rely on a large and growing body of internal documentation, including OneNote notebooks, PDF documents, and code repositories. Over time, this information has become difficult to search and navigate, leading to siloed knowledge, inconsistent understanding across the team, and challenges onboarding new team members efficiently.
Currently, finding relevant information often requires manual searching, asking colleagues for guidance, or relying on institutional memory. These approaches can be time-consuming and prone to misunderstandings. As teams and documentation continue to grow, there is a clear need for a more effective way to surface relevant internal knowledge and ensure that team members share a common, accurate understanding of processes, systems, and best practices.
The goal of this project is to design and build an Internal Knowledge Companion: a web-based, AI-powered assistant that helps internal users quickly find, understand, and reference information contained within Siemens Healthineers’ internal documentation.
The envisioned solution will leverage retrieval-augmented generation (RAG) techniques, combining an open-weight large language model with a document retrieval system. Rather than training a language model from scratch, the system will retrieve relevant content from internal documents at query time and use the language model to generate clear, context-aware responses grounded in those sources.
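A minimal sketch of the retrieval half of such a pipeline, assuming the sentence-transformers package and with all document text hypothetical, might look like:

```python
from sentence_transformers import SentenceTransformer, util

# Minimal retrieval step of a RAG pipeline; the retrieved passages are then
# placed into the LLM prompt so answers stay grounded in internal sources.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "To request a replacement part, open a ticket in the logistics portal.",
    "Onboarding checklists for new analysts live in the team OneNote.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top-k documents most similar to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, doc_embeddings, top_k=top_k)[0]
    return [documents[hit["corpus_id"]] for hit in hits]

context = "\n".join(retrieve("How do I order a spare part?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How do I order a spare part?"
# `prompt` is then sent to the open-weight LLM to generate a grounded answer.
```

Because new documents only need to be embedded and indexed, this design supports adding content over time without retraining the language model.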
Example use cases include:
The system will be designed to adapt over time, allowing new documents to be added without retraining the language model. By improving information accessibility and consistency, the Internal Knowledge Companion will enhance onboarding, reduce time spent searching for information, and help teams stay aligned.
Requirements:
Alen Baker is a 1973 graduate of the NCSU CSC department, and 2017 Hall of Fame inductee. He sponsored multiple Senior Design teams over his 20 years employed by Duke Energy. After retiring, he established the Fly Fishing Museum of the Southern Appalachians in Bryson City. He is currently developing a non-profit fly fishing center to support non-profit organizations with education and activities that utilize fly fishing as a means of recovery and enrichment, including programs for veterans with PTSD, cancer patients, foster youth, and scouting merit badges.
The Cap Wiese Fly Fishing Center resides within the historic Patterson School Campus. The private middle-to-high school closed in 2009 and reopened as a community education center for various education opportunities such as STEM camps and conservation education. In 2024, Alen Baker proposed that the campus be open to fly fishing organizations that “give back” via fly fishing instruction and support to those with an interest in fly fishing as a recovery mechanism as well as conservation. Fly fishing activities are spread among multiple classrooms as well as the campus bodies of water. The campus is 1,400 acres in total. This makes it difficult to know and record who visits the campus and where they are located. Obtaining a liability waiver and the identity of anglers utilizing the facilities is a challenge, especially with the limited office hours. Automation would allow a check-in process to be available 24/7. Digital records would allow for usage analysis.
This project will replace the existing manual campus access procedures with a fully automated, easy-to-use QR-activated cellular phone application. Essential functions include creating and maintaining an identity record for each individual who has physical contact with the campus, as well as obtaining signed waivers for both adults and minors. Upon access, a notification will be issued to the administrative office and a record of the event will be posted for future viewing and analysis.
Desired data for retention and functional use may include identity records for individuals (including selected details that profile fly fishing interests), signed liability-release waivers, event details, organizational partnerships, and linked inventory records for applicable assets.
In addition to Google technologies, the student team will employ software to create and deploy a mobile app, including software to read QR codes.
Hitachi Energy is committed to delivering high-quality products and services across the energy value chain. Ensuring product reliability and compliance with stringent quality standards is critical to maintaining customer trust and operational excellence.
Quality teams currently manage disposition workflows and analyze historical failure data through manual processes and fragmented tools. This approach increases the risk of overlooking test failures, delays in corrective actions, and limited traceability of issues. There is a need for an intelligent, automated system that not only streamlines workflows but also provides transparent reasoning behind AI-driven recommendations.
The objective of this project is to develop an AI-enabled Quality Support Platform that:
This solution will empower quality teams with actionable insights, reduce manual effort, and enhance overall product quality.
A high-level view of the current and proposed solution is as shown below.

Develop the solution as a SaaS tool hosted on MS Azure, leveraging the appropriate technology stack. While designed for minimal human intervention, incorporate a human-in-the-loop approach to allow feedback and override AI decisions.
Hitachi Energy is a global leader in transformers and digital solutions. CoreTec is a Linux-based hardware device that monitors transformer health using data from sensors. It is critical to the transformer ecosystem because it allows operators to track and fix issues before they become larger problems.
The CoreTec currently uses a basic hardware watchdog to maintain system stability. In general, the purpose of a hardware watchdog is to restart a system whenever a critical issue occurs.
With the current implementation, the hardware watchdog toggles a GPIO (General Purpose Input/Output) flag every second. If the flag is not toggled within a certain time interval, the watchdog assumes there is a critical issue and triggers a hardware restart.
This type of hardware watchdog is effective for detecting system faults, but it does not monitor process-level failures or system resource usage. It would be beneficial for the CoreTec to have a more fully-featured watchdog.
The goal of this project is to implement a software watchdog which monitors system resources and processes. The project will improve system resilience and provide more granular control over issue recovery.
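As a behavioral sketch only (shown in Python with the psutil package for illustration; the project's required language and tools are listed below), a software watchdog loop might look like the following, with process names, commands, and thresholds all hypothetical:

```python
import time
import subprocess
import psutil  # used here for illustration; the project dictates its own stack

# Hypothetical watched processes, restart commands, and resource limits.
WATCHED = {"sensor-daemon": ["/usr/bin/sensor-daemon"]}
CPU_LIMIT, MEM_LIMIT = 90.0, 80.0  # percent

def find(name: str):
    """Locate a running process by name, or return None if it is down."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            return proc
    return None

while True:
    for name, cmd in WATCHED.items():
        proc = find(name)
        if proc is None:
            print(f"{name} is down; restarting")
            subprocess.Popen(cmd)
        elif proc.cpu_percent(interval=1.0) > CPU_LIMIT or proc.memory_percent() > MEM_LIMIT:
            print(f"{name} exceeds resource limits; restarting")
            proc.kill()
            subprocess.Popen(cmd)
    time.sleep(5)
```

Unlike the GPIO-toggling hardware watchdog, a loop like this can recover a single misbehaving process without restarting the whole system.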
Technical Details:
The technologies are required unless otherwise stated.
Development Environment
Programming Language
Build Tools
Static Analysis Tools
External Libraries
JSON File
Other Tools and Constraints
The Laboratory for Analytic Sciences is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information.
Artificial Intelligence models can now perform many complex tasks (e.g. reasoning, comprehension, decision-making, and content generation) which until recent years have only been possible for humans. Like humans though, an AI model generally works best on tasks that it was specifically trained to perform. While general-purpose models (often called foundational models, or pretrained models) can have surprisingly strong performance across a range of applications in their domain, they are typically outperformed within any particular subdomain by a model which was specifically trained for that more narrow subdomain. The most common approach to building these more specialized models is to start with a foundational or pretrained model, and then fine-tune it with a dataset in the more narrow subdomain so that the result is specifically trained, and hyper-focused, on that subdomain.
For example, consider the speech-to-text (STT) model Whisper from OpenAI. Out of the box, this model is capable of producing very accurate transcriptions over a wide range of speech audio recordings (i.e., those having differing languages, dialects, accents, noise environments, verbiage, etc.). Now suppose that a user is only concerned with transcribing speech audio originating from a single environment and a single speaker, e.g. perhaps a recording of a professor’s lectures throughout a semester. This is a far narrower subdomain of application. A data scientist could, of course, apply Whisper and move on to other projects. However, if squeezing out the best accuracy possible is deemed worth the effort, then that data scientist could consider fine-tuning a custom version of Whisper for this particular application.
To fine-tune Whisper, the data scientist would start by considering Whisper to be a pretrained model, i.e. a starting point for the eventual model to be trained. Then the user could gather a relatively small set of labeled data, meaning recordings that are manually transcribed with ground truth transcriptions. In the lecture recording example, this might mean going to class for the first week of the semester, recording the audio, and manually transcribing everything that was spoken. With this labeled dataset in hand, the next step would be to fine-tune Whisper. Fine-tuning an AI model optimally can be a very complex process, perhaps both an art and a science, but general procedures are widely available. The result will be a fine-tuned Whisper variant that, in all likelihood, will produce more accurate speech-to-text results, for future recordings of that professor’s class, than the original Whisper model will. Note that this fine-tuned model will likely perform worse than the original Whisper model on most other applications.
Working with previous senior design teams, LAS has developed an online tool, TuneTank, to help streamline the process of fine-tuning a Whisper model to a given dataset. It is expected that this will enhance the efficiency and effectiveness of the process and its results. However, the existing fine-tuning interface has only very basic support for evaluating a fine-tuned model or for selecting the model best suited to a particular dataset.
To complement TuneTank, the LAS would like a Senior Design team to develop a Python program to evaluate Whisper fine-tunes. Given one or more Whisper models, the program should benchmark each model on several pre-determined and user-specified datasets using metrics such as Levenshtein distance and word error rate (WER). Once the benchmark is complete, the program should recommend the best overall Whisper model and the best Whisper model for special use cases (noisy data, multilingual data, etc.).
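A minimal sketch of the core metric, word-level Levenshtein distance normalized into WER, might look like the following; the benchmarking program would average this over each dataset per model and recommend the model with the lowest mean WER overall and per special-case dataset:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed as word-level Levenshtein distance via dynamic programming."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25
```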
The LAS will provide the team with one or more datasets to use for development and testing. The LAS will also provide the team with experienced mentors to assist in understanding the various AI aspects of this project, with particular regard to the fine-tuning methodologies to be implemented. However, this is a complex topic, so at least half the team should have a strong interest in machine learning/artificial intelligence.
The team will have great freedom to explore, investigate, and design the benchmarking system described above. However, the methodology employed should not have any restrictions (e.g. no enterprise licenses required). In general, we will need this technology to operate on commodity hardware and software environments, and only make use of technologies with permissive licenses (MIT, Apache 2.0, etc). Beyond these constraints, technology choices will generally be considered design decisions left to the student team. The LAS will provide the student team with access to AWS resources for development, testing and experimentation, including GPU availability for model training.
ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students.
The Laboratory for Analytic Sciences is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information.
Artificial Intelligence models can now perform many complex tasks (e.g. reasoning, comprehension, decision-making, and content generation) which until recent years have only been possible for humans. Like humans though, an AI model generally works best on tasks that it was specifically trained to perform. While general purpose models (often called foundational models, or pretrained models) can have surprisingly strong performance across a range of applications in their domain, they are typically outperformed within any particular subdomain by a model which was specifically trained for that more narrow subdomain. The most common approach to building these more specialized models is to start with a foundational or pretrained model, and then fine-tune it with a dataset in the more narrow subdomain so that the result is specifically trained, and hyper-focused, on that subdomain.
For example, consider the speech-to-text (STT) model Whisper from OpenAI. Out of the box, this model is capable of producing very accurate transcriptions over a wide range of speech audio recordings (i.e., those having differing languages, dialects, accents, noise environments, verbiage, etc.). Now suppose that a user is only concerned with transcribing speech audio originating from a single environment and a single speaker, e.g. perhaps a recording of a professor’s lectures throughout a semester. This is a far narrower subdomain of application. A data scientist could, of course, apply Whisper and move on to other projects. However, if squeezing out the best accuracy possible is deemed worth the effort, then that data scientist could consider fine-tuning a custom version of Whisper for this particular application.
To fine-tune Whisper, the data scientist would start by considering Whisper to be a pretrained model, i.e. a starting point for the eventual model to be trained. Then the user could gather a relatively small set of labeled data, meaning recordings that are manually transcribed with ground truth transcriptions. In the lecture recording example, this might mean going to class for the first week of the semester, recording the audio, and manually transcribing everything that was spoken. With this labeled dataset in hand, the next step would be to fine-tune Whisper. Fine-tuning an AI model optimally can be a very complex process, perhaps both an art and a science, but general procedures are widely available. The result will be a fine-tuned Whisper variant that, in all likelihood, will produce more accurate speech-to-text results, for future recordings of that professor’s class, than the original Whisper model will. Note that this fine-tuned model will likely perform worse than the original Whisper model on most other applications.
The Laboratory for Analytic Sciences (LAS) has been fine-tuning AI models for many years, and expects to continue doing so for many more. So, it would be desirable to make this process as efficient, effective, and user-friendly as possible. In general, fine-tuning efforts at the LAS are done on an individualized basis, using a disorganized bevy of Jupyter Notebooks and data formatting scripts. This introduces unwelcome overhead into the actual process of creating useful models quickly.
A fall 2025 senior design team helped to automate and simplify the process of fine-tuning a model. They created TuneTank, a web application that lets users create and manage a queue of fine-tuning jobs. The user can create new jobs, with an easy-to-use interface for specifying the most essential fine-tuning parameters, and the application offers a basic interface for starting, suspending and resuming fine-tuning jobs.
TuneTank focused primarily on ease of use through a real-time web UI. We would like to build a second version of TuneTank that supports additional fine-tuning parameters and techniques such as LoRA or quantized training. Given some of the architectural limitations of TuneTank (Docker and wav2vec clustering integration) and feedback from the previous team, starting from scratch with a different tech stack is probably the way to go.
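As a sketch of what LoRA support could build on, the snippet below uses the Hugging Face PEFT library with a Whisper checkpoint; the hyperparameter values are illustrative, not recommendations:

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model  # Hugging Face PEFT library

# Load a pretrained Whisper checkpoint as the starting point.
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# LoRA trains small low-rank adapter matrices instead of all model weights;
# the hyperparameters below are illustrative, not tuned recommendations.
config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```

Training only the adapters keeps memory and compute needs modest, which fits the commodity-hardware constraint below.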
The LAS will provide the team with one or more datasets to use for development and testing. The LAS will also provide the team with experienced mentors to assist in understanding the various AI aspects of this project, with particular regard to the fine-tuning methodologies to be implemented. However, this is a complex topic, so at least half the team should have a strong interest in machine learning/artificial intelligence.
NOTE: Commercial applications for the purpose described above do already exist in some form on the market. If the team decides to take inspiration (or even portions of actual software) from such applications, that is fine with the LAS, so long as the constraints below are not violated (nor, of course, any legal restrictions).
The team will have great freedom to explore, investigate, and design the fine-tuning system described above. However, the methodology employed should not have any restrictions (e.g. no enterprise licenses required). In general, we will need this technology to operate on commodity hardware and software environments, and only make use of technologies with permissive licenses (MIT, Apache 2.0, etc). Beyond these constraints, technology choices will generally be considered design decisions left to the student team. The LAS will provide the student team with access to AWS resources for development, testing and experimentation, including GPU availability for model training.
ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students.
OpenDI's mission is to empower you to make informed choices in a world that is increasingly volatile, uncertain, complex, and ambiguous. OpenDI.org is an integrated ecosystem that creates standards for Decision Intelligence. We curate a source of truth for how Decision Intelligence software systems interact, thereby allowing small and large participants alike to provide parts of an overall solution. By uniting decision makers, architects, asset managers, simulation managers, administrators, engineers, and researchers around a common framework, connecting technology to actions that lead to outcomes, we are paving the way for diverse contributors to solve local and global challenges, and to lower barriers to entry for all Decision Intelligence stakeholders.
OpenDI’s open source initiative is producing the industry standard architecture for Decision Intelligence tool interoperability, as well as a number of example implementations of OpenDI compliant tools and associated assets. The initiative's philosophy is to develop in the open, so all projects are available on github.com.
Decision Intelligence is a human-first approach to deploying technology for enhancing decision making. Anchoring the approach is the Causal Decision Model (CDM), comprising actions, outcomes, intermediates, and externals as well as causal links among them. CDMs are modular and extensible, can be visualized, and can be simulated to provide computational support for human decision makers. The OpenDI reference architecture provides a specification of CDM representation in JSON as well as defines an API for exchanging CDMs; however, there is no existing tool that allows curation, provenance, and sharing of these extensible CDMs. This project will provide OpenDI’s Model Hub, similar to Docker Hub for containers or Hugging Face for AI models, to allow public browsing, searching, and sharing of CDMs.
The current state of the OpenDI Model Hub is a partial implementation that lacks the richness and robustness needed to serve as a home for community contributions of DI models. In particular, tooling to support the local creation or editing of models that can be pushed to and pulled from the hub is required.
The best way to think about OpenDI’s Model Hub is by looking at Docker Hub.
Users should be able to:
The existing model hub has the basic functionality for requirements 1, 3, 4, 5, and 6 as a proof of concept. Students this semester will emphasize full account integration (requirement 1), model ownership and sharing (requirement 6), and tool creation integration (requirement 8).
This project will require the team to contribute directly to the OpenDI open-source assets. OpenDI assets are developed publicly on GitHub, and the result (and process) of this project will be hosted there as well; students will be expected to contribute to the public OpenDI repositories on github.com. This means team members will be expected to follow OpenDI community contribution standards and to contribute their work under the license OpenDI selects. Team members are encouraged to use their own GitHub accounts to get credit for their contributions.
The existing Model Hub has a backend written in Go and frontend in React. Students will be extending these implementations, so familiarity with Go and React is encouraged. A prior team began development of a CLI tool in Python, which this semester’s team will extend.
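As a rough illustration of the push/pull interaction the CLI extension is meant to support, here is a minimal Python sketch. The base URL, endpoint paths, and command names are assumptions for illustration, not the actual Model Hub API.

    # Hypothetical CLI sketch: pull a CDM from the hub by name, or push a
    # local CDM file to it. All endpoints below are placeholders.
    import argparse, json, urllib.request

    HUB = "https://hub.example.org/api"  # placeholder base URL (assumption)

    def pull(name: str) -> None:
        """Download a model by name and save it to <name>.json."""
        with urllib.request.urlopen(f"{HUB}/models/{name}") as resp:
            model = json.load(resp)
        with open(f"{name}.json", "w") as f:
            json.dump(model, f, indent=2)

    def push(path: str) -> None:
        """Upload a local model file to the hub."""
        data = open(path, "rb").read()
        req = urllib.request.Request(f"{HUB}/models", data=data,
                                     headers={"Content-Type": "application/json"},
                                     method="POST")
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        ap = argparse.ArgumentParser(prog="odi")
        sub = ap.add_subparsers(dest="cmd", required=True)
        sub.add_parser("pull").add_argument("name")
        sub.add_parser("push").add_argument("path")
        args = ap.parse_args()
        pull(args.name) if args.cmd == "pull" else push(args.path)

The real tool will also need authentication and ownership checks, which tie into the account-integration and sharing requirements above.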
Okan Pala and the NC State Office of Research Commercialization are working together to develop a “proof-of-concept” location-based ad platform. A Spring 2025 Senior Design team worked initially on this and created an application that has the backbone of an advertising system. We aim to build on top of previous work completed to add improved functionality to complete the initial version of the system.
Problem 1: Direct interactivity with mobile digital displays (MDDs) and/or fixed displays does not exist. Static advertisements on vehicle-top and in-vehicle displays are cumbersome, expensive, and not targeted to specific individuals or groups. A big segment of the population is cut off from the market. There is no easy way for a mom-and-pop store to put advertisements on taxis. They most likely need to go through an advertising agency, and it won’t be targeted (geographically and temporally) in most cases. Similarly, there is no avenue for individual expressions on public digital displays (mobile or otherwise).
Problem 2: There is no personalization, customization or user input into the advertisements/messages placed on vehicles (especially within the specific proximity of a given location).
Problem 3: Personal expression of messages in public spaces through digital displays is not available and, when available, is severely restricted or not geo-targeted.
Problem 4: Accurate evaluation and reporting of the effectiveness of digital displays, which determine the return on investment for advertisers, is hard to achieve. There is a need to implement solutions that measure and report the effect of mobile digital display advertisements (or personal messages) through computer vision and other approaches.
This project involves developing a mobile app and software infrastructure to manage and deliver geo-specific and temporally targeted ads and personal messages to digital vehicle-top and in-vehicle displays. Users will have control over the message or ads’ timing (temporal control) and location (geo-fenced).
An initial proof-of-concept app-based system that places ads/personal messages on MDDs was developed in Spring 2025 by a previous senior design team. It included a bidding system that allows users to outbid others for a message (or ad) to be shown at a specific time and place. There should also be an option for the passengers of vehicles, or the owners/drivers of the fleet, to interact with the MDDs through another display placed inside the vehicle. This could be part of the customer-facing app or the interface used by the advertisers.
The revenue-sharing business logic is based on real-time revenue from mobile digital displays and the effectiveness of each campaign. Ad or messaging revenues from digital displays are shared with the vehicle (or fleet) owner. The application business logic should be specific to the user type and should be accompanied by a user-specific interface. We have identified five user groups: the Advertisers, Providers, Riders, and Ad Targets (people who are the target audience for the advertisement or messaging). The fifth user type is the System Manager, a representative of the company itself, who will interact with and oversee the system.
The “Advertisers” can be ad agencies, large corporations, local chains, local single businesses (including political campaigns), riders themselves, individual users (non-business), and governmental users (local, regional, national). The advertising users may or may not need content creation assistance. An existing version of the app includes an AI image generator to help users create ad campaigns (minus an incorporated QR code generator). Advertisers also need a spatially enabled dashboard to track their campaigns (included in the previous version) and to evaluate the effectiveness of each campaign (still needed). This dashboard can receive information from the ad evaluation system, individual displays, mobile apps, the accounting system, etc. Advertiser account creation (individual and corporate), spatially/temporally enabled campaign creation that includes AI-assisted ad campaign creation, and the bidding business logic were developed in a previous version. The previous version also includes an admin panel for advertisers to review their existing campaigns and make changes.
The “Providers” are grouped as individual vehicle owners (e.g., an Uber driver, independent taxicab owner, personal vehicle owner, etc.) and fleet owners (taxicab companies, businesses with their own service fleets such as HVAC companies, etc.). In the future this may also include government entities.
The “Riders” are the private citizens or corporate partners that would be our future affiliates. This group is relevant only for the MDDs on vehicles for hire. The “Ad Targets” are the people who would potentially see the ad or message and respond to it in some way; one example is clipping a digital coupon through a QR code provided on the ad. Explicit interaction through Ad Target action, or implicit data created through computer vision (e.g., near-real-time eye-contact detection with a counter), would be some of the inputs that make up the ad evaluation business logic. This might also include external data sources, such as Google crowd and business customer presence data, along with other available metrics.
The last user type is the “System Manager.” This user type will have sub-types with varying privileges in the future, but initially this is the user who will oversee the campaigns, intervene when needed, and approve the campaigns with assistance from an AI helper.
We would like the existing API to be expanded in such a manner that, in the future, the system could work with other entities (e.g., Uber and Lyft) to expand past the Taxi industry and individual drivers.
McDonald’s, 2-5pm: 50% off on coffee products
Local McDonald’s store chain owners agree to a joint ad campaign with a 50% off coupon linked to a QR code displayed on the ad. Three versions of this ad campaign are created: one shown within 3 miles of each McDonald’s restaurant location with a higher bid limit, one shown in the whole Triangle region with a very low bid, and another shown in urban areas within 1 mile walking distance of the restaurant locations with a high bid. Each MDD displays 6 ads per minute, depending on its location and the bids’ locations and amounts. The length of time each bid is displayed varies depending on the bids. In addition, if a QR code linked from the MDD is used, the company pays an additional amount.
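One way the per-slot selection implied by this scenario might work is sketched below in Python: for each minute, keep only the bids whose geofence contains the display’s current location, then award the slots to the highest bids. The names and the 6-slots-per-minute figure follow the scenario above; this is purely illustrative, not the prior team’s actual bidding logic.

    # Illustrative per-MDD ad-slot selection: geofence filter + bid ranking.
    from dataclasses import dataclass
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class Bid:
        campaign: str
        amount: float      # bid in dollars per slot
        lat: float         # geofence center
        lon: float
        radius_mi: float   # geofence radius in miles

    def miles(lat1, lon1, lat2, lon2):
        """Haversine distance in miles."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = (sin(dlat / 2) ** 2
             + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
        return 3956 * 2 * asin(sqrt(a))

    def slots_for_minute(bids, mdd_lat, mdd_lon, slots=6):
        """Return the winning bids for the next minute's ad slots."""
        eligible = [b for b in bids
                    if miles(b.lat, b.lon, mdd_lat, mdd_lon) <= b.radius_mi]
        return sorted(eligible, key=lambda b: b.amount, reverse=True)[:slots]
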
Ride-hail or taxi cab hired
After entering the ride vehicle (or in advance), the rider would pay the system to display their own ad/personalized message on the MDD. The rider would get a slight priority on the bidding system. The driver (or the fleet owner) can choose either to generate revenue by allowing others to display an ad on their MDDs or choose to display their own ads with heavy priority on the bidding system.
Big events are large revenue opportunities, since mobile digital display ads or messages will get more impressions. The advantage of this app is that the potential for income increases during special events. For example, some special events and crowded areas would provide more opportunities for advertisers to reach wider segments. More people in the area seeing the ads means more revenue is generated and, hence, higher income for the provider.
Personalization of Vehicle-top and In-vehicle displays
When a vehicle accepts a call (ride request), the exterior display systems immediately start displaying personalized messages, logos, etc., so that app users can recognize the taxi coming to pick them up. This could also be used by corporate customers for their own personnel.
Riders get a choice to display their own message or ad with a bidding advantage. For example, corporate customers may choose to display their own corporate logo, message, or ad.
We are flexible about technology. The team should research the best available technology for each component and design the system accordingly. As for location-specific analysis, we know that ESRI (a GIS software vendor) has the technology for geofencing. They also have a development platform for app development, but we are not sure whether it is the best option for a robust application. We will start with the technology choices that the previous team made and change them as necessary.
SAS provides technology that is used around the world to transform data into intelligence. A key component of SAS technology is providing access to good, clean, curated data. The SAS Data Management business unit is responsible for helping users create standard, repeatable methods for integrating, improving, and enriching data. This project is being sponsored by the SAS Data Management business unit in order to help users better leverage their data assets.
An increasingly prevalent and accelerating problem for businesses is dealing with the vast amount of information being collected. Combined with a lack of data governance, enterprises are faced with conflicting use of domain-specific terminology, varying levels of data quality/trustworthiness, and fragmented access. A lack of timely and accurate data reporting may end up driving poor decisions, operational inefficiencies, and financial losses, in addition to exposing businesses to regulatory penalties, compliance failures, and reputational damage, ultimately putting them at a competitive disadvantage.
To help address the underlying issues related to managing their data, a business may buy, or build, a data governance solution that allows them to holistically identify and govern an enterprise's data assets. SAS has developed a data catalog product which enables customers to inventory the assets within their SAS Viya ecosystem. The product also allows users to discover various assets, explore metadata, and visualize how the assets are used throughout the platform.
In past semesters, SAS student teams have explored different avenues and extensions to a data catalog, such as operationalizing governance and creating accessible components to visualize metadata. In this project, we'd like to focus instead on the discovery and, primarily, the exploration phase. A data catalog provides a curated view of the metadata stored within the environment, with the intention of providing easy-to-understand dashboards and visualizations about specific types of metadata to a widespread audience. When displaying cataloged assets to an end user, a view of the metadata for a specific context or user persona is presented. This view hides some of the metadata and the underlying complexity.
And this is where we get to one of the weaknesses of visualizing a data catalog: the catalog provides a metadata system that supports storing any type of metadata object, but displaying that information to end users (outside of just an API) in a useful way is difficult without an understanding of the context (including the business, domain, and user personas).
What the sponsor would like to investigate in this project is a metadata explorer: an interface that gives the user unfiltered access to all the metadata in the environment (that they are authorized to view). If the catalog is akin to a traditional, physical library card system, then the explorer is akin to walking through the bookstacks or shelves. An explorer opens up possibilities for end users. If we're already providing an interface to view the metadata, then why not also be able to edit it? Or, for example, once a user understands the underlying metadata, they could create a customized dashboard.
On startup, the application must ingest/load a pre-defined set of metadata (in a JSON format). An initial set of metadata will be provided by the sponsors as well as a script for the generation of synthetic metadata. All metadata provided will conform to the Open Metadata schema (https://docs.open-metadata.org/v1.4.x/main-concepts/metadata-standard/schemas).
The application must provide an API to perform create, read, update, and delete (CRUD) operations upon metadata in the system.
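As a rough illustration of these two requirements together (startup ingest plus a CRUD API), here is a minimal sketch using Flask and an in-memory store. The framework choice, routes, file name, and the assumption that every entity carries an "id" field are illustrative only; real entities would conform to the Open Metadata schema referenced above.

    # Minimal sketch: load a pre-defined JSON metadata set on startup, then
    # expose CRUD operations over the entities. Not a production design.
    import json
    from flask import Flask, request, jsonify, abort

    app = Flask(__name__)
    store: dict[str, dict] = {}

    def ingest(path="metadata.json"):
        """Load the pre-defined metadata set provided by the sponsors."""
        for entity in json.load(open(path)):
            store[entity["id"]] = entity

    @app.post("/metadata")
    def create():
        entity = request.get_json()
        store[entity["id"]] = entity
        return jsonify(entity), 201

    @app.get("/metadata/<eid>")
    def read(eid):
        if eid not in store:
            abort(404)
        return jsonify(store[eid])

    @app.put("/metadata/<eid>")
    def update(eid):
        if eid not in store:
            abort(404)
        store[eid] = request.get_json()
        return jsonify(store[eid])

    @app.delete("/metadata/<eid>")
    def delete(eid):
        store.pop(eid, None)
        return "", 204

    if __name__ == "__main__":
        ingest()
        app.run()
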
The application must provide an easy-to-use, performant interface for users to explore the metadata available in the system. The students should brainstorm various approaches or UI patterns that could be used to allow users to visualize the metadata.
The application should provide the following functionality:
Out of Scope
The following are out of scope for this project:
The NC State Computer Science Undergraduate Programs and Advising Team is responsible for the successful recruitment, retention, and graduation of over 1,500 undergraduate students. Our mission is to guide students through the curriculum, policies, and opportunities available within the department, college, and university. A core component of this is providing accurate, timely, and accessible advising resources to all students to help them develop complete and reasonable plans for progress towards degree.
The volume and complexity of information regarding undergraduate degree requirements, concentrations, tracks, and departmental policies often make it challenging for students to navigate their academic journey. While this information is available online across various official NC State, College of Engineering, and Computer Science pages, students frequently struggle to synthesize it to create long-term plans or identify relevant co-curricular opportunities. This leads to an overwhelming number of repetitive advising questions and a reliance on human advisors for easily answerable inquiries, reducing the time advisors have for complex, individualized student needs.
The student team will create an advising chatbot that synthesizes curated information from across the university that supports students in their academic journey. The chatbot will be trained to understand the expectations behind the BS in Computer Science degree and to provide feedback on students’ plans to complete their degree programs.
Important Note: The chatbot is NOT intended to replace the required advising process, but rather to complement the work of human advisors by handling informational queries and providing students with preliminary planning tools. The advisors will remain responsible for reviewing student plans and actively supporting students as they identify concrete action items related to co-curricular activities and plans. The motivation is to streamline access to essential advising information and improve the efficiency of the advising process by focusing human advisor time on high-level guidance.
The envisioned solution is a bespoke, web-based advising chatbot that is trained on official NC State Computer Science undergraduate degree and policy documents.
Core Functionality Use Cases:
The solution will require:
End users (students) will benefit by gaining 24/7 access to accurate, up-to-date advising information for self-service planning. It will empower them to arrive at advising sessions with a greater understanding of their options, leading to more productive and personalized time with their advisor. Advisors will benefit by having fewer basic informational queries, allowing them to dedicate more time to complex advising, mentorship, and reviewing the plans students create using the chatbot.
A stretch goal is logging chat interactions so that the sponsor team can use the information to further refine advising materials for students.
The technology stack should be similar to the one used in CSC 326 to minimize new technology exploration and allow for the sponsors to support:
There will also be several extensions beyond the CSC 326 web stack:
Additionally, the team will need to explore how to create a bespoke chatbot; they will evaluate options and provide a recommended solution to the project sponsors for approval.
The sponsors will provide access to webpages and other resources that should inform creation of the chatbot’s underlying model.
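One common pattern the team might evaluate is retrieval-augmented generation (RAG): index the sponsor-approved pages, retrieve the passages most similar to a student’s question, and hand only those to a language model. The sketch below uses scikit-learn TF-IDF retrieval with a stubbed model call; the documents, the generate() stub, and the overall approach are assumptions for discussion, not a chosen design.

    # Rough RAG sketch: TF-IDF retrieval over curated advising text, then a
    # placeholder LLM call. The actual model/provider is the team's decision.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "CSC concentration requirements: ...",   # placeholder curated page text
        "Degree plan policies: ...",
        "Co-curricular opportunities: ...",
    ]
    vec = TfidfVectorizer().fit(docs)
    doc_mat = vec.transform(docs)

    def retrieve(question: str, k: int = 2) -> list[str]:
        """Return the k passages most similar to the question."""
        sims = cosine_similarity(vec.transform([question]), doc_mat)[0]
        return [docs[i] for i in sims.argsort()[::-1][:k]]

    def generate(prompt: str) -> str:
        # Stub: replace with a call to whichever LLM the team selects.
        return "[model response here]"

    def answer(question: str) -> str:
        context = "\n".join(retrieve(question))
        prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
        return generate(prompt)

Grounding answers in retrieved official text, rather than the model’s general knowledge, supports the accuracy requirement and keeps human advisors in the loop for anything the sources do not cover.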
Dr. Stevenson leads an Environmental Education (EE) Research Lab within the Parks, Recreation & Tourism Management Department at NC State. One of the lab's projects partners with Duke University Marine Lab’s Ready, Set, Resilience project, in which teachers from across the state teach about resilience through nature fables. Dr. Stevenson runs the research and evaluation arm.
The Fall 2025 Senior Design team launched development of a web-based platform to support Ready, Set, Resilience by enabling teachers to assign assessments and collect student evaluation data in a centralized system. The platform includes reusable assessment templates, workflows to set up teachers, classrooms, and students, authentication for all user groups, and initial data visualizations. The Fall team also produced a detailed report documenting the current system, technical decisions, and recommended next steps.
In Spring 2026, the focus will shift from initial build-out to real-world testing and refinement. The team will pilot key workflows with the NC State and Duke project staff and participating teachers, then iteratively improve the system based on stakeholder feedback, especially around usability, privacy expectations, and classroom practicality. The goal is to evolve the platform into something reliable and easy to maintain, while keeping it flexible enough to adapt as the Ready, Set, Resilience program and its assessments continue to develop.
As with the Fall 2025 Senior Design team, the main purpose of the project is to provide a user-friendly, streamlined way for teachers and students involved in our project to upload evaluation and assessment data, visualize trends, and send that data to the project team for examination across classrooms and schools. Much of this project will involve iterative design in response to stakeholder needs, which are still forming or may change; students interested in problem-solving with clients who have real-world needs, and in collaborating to balance user experience with sustainable and achievable back-end design, are a good fit.
The team should start by reviewing the Fall 2025 report for an understanding of where we started and ended up, as well as suggested next steps. The EE Lab can also provide guidance on prioritizing, clarifying, and potentially editing these next steps to fit current needs. Early in the process, we also anticipate meeting with groups of end-users or other stakeholders in the project; this may include school district personnel, who will be interested in data security and privacy; teachers, who will be concerned with functionality; and/or Ready, Set, Resilience program staff, who may have a variety of thoughts and feedback. We anticipate these meetings will present nuanced challenges or priorities.
Spring 2026 Deliverables (initial scope; refined with stakeholder feedback):
Over winter break, Dr. Stevenson met with a few teachers who would be end users, and those conversations raised a few ideas and questions that should provide good examples of the types of issues that may arise. For instance, one teacher raised concerns with using school email addresses for students, as the district has restrictions on external systems that link to those addresses; we have an email in with the district to inquire further. In addition, the RSR team has been working on a general assessment rubric we think will apply to lots of the RSR activities students complete; we could imagine focusing on a mobile/tablet-friendly version of this rubric that could integrate with the web-based system to allow teachers to quickly assess student work as they circulate around the room.
Technologies (used by the Fall 2025 Senior Design team):
Other constraints:
The Juntos Program (pronounced “Who-n-toes”), meaning “Together” in Spanish, is dedicated to uniting community partners in order to equip students in grades 8 through 12, along with their families, with the knowledge, skills, and resources needed to ensure high school graduation and broaden post-secondary academic and career opportunities.
Launched in 2007, the program was born out of a survey conducted among Latino students and their families, revealing a critical need for a greater understanding of the educational system. Today, Juntos serves all students and families interested in enrolling, offering a comprehensive program that includes Family Engagement, 4-H clubs, Academic Success Coaching and an annual Summer Academy.
In recent years, Juntos has expanded to include workforce development initiatives, encouraging high school students to explore College and Career Pathways. The program’s success is made possible through the collaborative efforts of Extension’s 4-H and Family & Consumer Sciences (FCS) agents, K-12 school systems, post-secondary institutions, and community volunteers, creating a sustainable and impactful model that continues to thrive in communities across the United States.
In North Carolina, the Juntos Program has expanded to approximately 21 school sites, each hosting weekly Juntos–4-H club meetings and periodic Family Engagement events. Participant attendance is currently documented using paper sign-in sheets on which students and family members write their names and signatures. Program coordinators at each school site scan these attendance sheets and upload them to the Juntos North Carolina State Leadership Drive. Juntos program assistants at the North Carolina State Office then manually enter participant information into a Microsoft Excel roster. This multi-step, paper-based process involves numerous handoffs before data coding is completed, resulting in inefficiencies, delays, and increased administrative burden for both site coordinators and program assistants.
Build an attendance and signature collection tool for the Juntos Program to replace paper sign-in sheets and reduce manual scanning and spreadsheet entry.
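To ground early discussion, here is a sketch of what a digital sign-in record might look like in Python: one row per participant per meeting, with the signature captured as an image (e.g., drawn on a tablet). All field names are assumptions for discussion, not a finalized schema.

    # Illustrative data model for replacing the paper sign-in sheets.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class SignIn:
        site: str                 # school site hosting the Juntos-4-H club
        meeting_date: str         # e.g., "2026-02-12"
        participant_name: str
        role: str                 # "student" or "family member"
        signature_png: bytes      # raw image bytes from a signature-pad widget
        recorded_at: datetime = field(default_factory=datetime.now)

Records like these could be exported directly to the state office roster, eliminating the scan-upload-retype steps described above.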
The Laboratory for Analytic Sciences, LAS, is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information.
Data labeling and data validation are important components of building AI and ML models. At LAS, we built a customizable data labeling application, called Infinitypool, to support the assessment of data. Yet building high-quality data sets with human data labeling is time-consuming and costly, and we need innovations that drive down the time and cost of building high-quality labeled datasets. This is especially important for the image and video domains, where there can be substantial similarity between adjacent images or frames.
This project will build on the LAS labeling application Infinitypool to create efficiencies in labeling image data, especially image frames from video.
The current Infinitypool application is task-based, meaning that each task (image) is labeled one at a time, and by a single person. LAS has expansive datasets of frame captures from videos that do not easily lend themselves to the one-image/one-label approach. We are looking to develop a new way to both present tasks for labeling and present multiple images (potentially in a tile display), allowing multiple images to be labeled in a given “task,” and then to integrate these updates into existing workflows across multiple data modalities.
This work will include front-end UI development on an existing code base, API development, and back-end integration. We are currently using React for the front end, the Infinitypool API service developed using the Adonis.js framework, and PostgreSQL for our database.
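As a starting point for discussing the API changes, here is a sketch of what a multi-image task payload might look like, expressed in Python. The field names are assumptions for illustration, not Infinitypool’s actual schema.

    # Illustrative multi-image labeling task: a tile of adjacent video frames
    # grouped into one task, so one label can apply to many frames at once.
    task = {
        "task_id": "vid042-frames-0120-0128",
        "layout": "tile",               # render as a grid, not a single image
        "items": [
            {"frame": i, "url": f"https://example.org/vid042/frame_{i}.jpg"}
            for i in range(120, 129)
        ],
        "labels": ["vehicle", "person", "none"],
        "selection": "multi",           # labeler selects several frames per label
    }
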
The team will have great freedom to explore, investigate, and design the labeling interface and interactions, but will be subject to design and technology decisions in place with the existing Infinitypool application. However, any new methodology employed should not have any restrictions (e.g., no enterprise licenses required). In general, we will need this technology to operate on commodity hardware and software environments, and to make use only of technologies with permissive licenses (MIT, Apache 2.0, etc.). Beyond these constraints, technology choices will generally be considered design decisions left to the student team. The LAS will provide the student team with access to AWS resources for development, testing, and experimentation, including GPU availability for model training.
ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students.
NetApp ONTAP Performance Engineering helps maintain and improve ONTAP system performance measured using various metrics on a suite of NetApp hardware products. For conducting these tests, Performance Engineering operates and maintains multiple performance labs.
Performance labs are highly automated environments used by the NetApp engineering community to submit performance tests with just a few clicks. They are made up of islands of data-driving clients and NetApp platforms connected to the same switch, avoiding network hops in these performance-critical tests. While one lab is dedicated to automated release performance testing on a regular basis, another enables analysts to submit tests on demand, and a third helps optimize ONTAP builds for performance. NetApp invests millions of dollars in these performance labs. Due to the demand for some of the platforms on which these tests run, submitted tests typically ‘wait’ for some time before they start running. While this wait time is fairly deterministic in a lab where tests are submitted in an automated fashion, in labs where users submit tests on demand it can vary a lot and depends on multiple factors, including the demand for a platform. In such cases, it would be helpful if, for a test being submitted, one could get an estimate of when the test would start and when it would end. Building this ‘ETA’ tool is the aim of this project.
You are asked to build a tool that will estimate the start and end times for a given test configuration, based on the historic data of tests submitted on this platform, data belonging to tests with ‘similar’ configurations, and other relevant information pertaining to the lab configuration. In addition to providing these estimates, the tool should also provide visualizations of the ‘wait’ times for a given test configuration and suggest alternative platforms on which the test could start sooner, if feasible. This tool can be developed as a web application, and ETA generation for a test configuration should also be made available via API querying.
You can assume that the configuration data for any given test is a JSON file whose keys are various test parameters (strings); the values can be numeric, string, or another data type. Of the keys present in this JSON file, some may be more important than others in estimating the start and end times. Although the keys are well-defined and do not change significantly from one test to another, the same cannot be said about the values they take. Assume the JSON can contain about 200 such keys, and that the number of tests can reach a hundred thousand over the course of one year for each lab.

The historic data for a test is a collection of the following: submission time, wait time, start time, and end time. Lab configuration can include the platform on which the test is supposed to run, the type of data-generating clients used, the switch to which this platform and its data-generating clients are connected, the number of such platforms available on this switch, the number of such data-generating clients available on this switch, and the number of such pieces of equipment that are free at the moment a test is submitted. Some of this information can be found in the test configuration itself, whereas the rest could be sourced from a different location. Each of these fields can be considered a feature, grouped by test configuration, historic data, and lab configuration. Based on this information, the team is expected to generate data that closely follows this schema. Historic data is numeric; lab configuration and test configuration are part numeric and part categorical.
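A minimal sketch of the required synthetic-data generation is shown below. The distributions (exponential waits, log-normal run times) and all parameter values are arbitrary placeholders for the team to refine against the schema above.

    # Sketch of synthetic test-history generation: ~200 categorical config
    # keys plus numeric historic fields. Distributions are placeholders.
    import random
    from datetime import datetime, timedelta

    PLATFORMS = [f"platform-{i}" for i in range(8)]

    def synth_test(seed: int) -> dict:
        rng = random.Random(seed)
        config = {f"param_{i}": rng.choice(["a", "b", "c"]) for i in range(200)}
        config["platform"] = rng.choice(PLATFORMS)
        submitted = datetime(2025, 1, 1) + timedelta(minutes=rng.randrange(525600))
        wait = rng.expovariate(1 / 90)           # mean wait ~90 minutes
        duration = rng.lognormvariate(4, 0.5)    # run time in minutes
        start = submitted + timedelta(minutes=wait)
        return {
            "config": config,
            "submission_time": submitted.isoformat(),
            "wait_minutes": round(wait, 1),
            "start_time": start.isoformat(),
            "end_time": (start + timedelta(minutes=duration)).isoformat(),
        }

    dataset = [synth_test(s) for s in range(1000)]  # scale toward ~100k per lab-year
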
You are required to work on three categories of models that estimate the start and end times: analytical, statistical, and neural-network-based.
Note that not all data is useful. For example, in statistical modeling, you may want to identify key features that contribute to estimating the start and end times and improve the model to work on this limited number of features. In order to identify those features, you may do correlation analysis, principal component analysis, and other dimensionality reduction techniques. You may also want to try tree-based methods as they are known to work well with structured data.
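The sketch below illustrates two of the screening approaches named above on a toy stand-in table: a simple correlation ranking and tree-based feature importances (scikit-learn shown; the library choice and the toy columns are assumptions).

    # Feature screening sketch: correlation with wait time vs. random-forest
    # importances. The toy DataFrame stands in for the flattened per-test table.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    df = pd.DataFrame({
        "platform": ["p1", "p2", "p1", "p3"] * 50,
        "free_platforms": [3, 1, 2, 0] * 50,
        "wait_minutes": [20.0, 95.0, 40.0, 180.0] * 50,
    })
    X = pd.get_dummies(df.drop(columns=["wait_minutes"]), dtype=float)
    y = df["wait_minutes"]

    # 1) Simple linear-association screen.
    corr = X.apply(lambda col: col.corr(y)).abs().sort_values(ascending=False)
    print(corr)

    # 2) Tree-based importances, which can capture non-linear effects.
    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print(pd.Series(forest.feature_importances_, index=X.columns)
            .sort_values(ascending=False))

PCA or other dimensionality-reduction methods can then be applied to the surviving features before modeling.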
In the case of neural-network-based models, you may try architectures that help with time series, such as an RNN, an LSTM, or an SLM. You may also try other models if you are convinced they would help with the prediction.
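For orientation, here is a minimal PyTorch sketch of an LSTM regressor for wait times: each input is a short history of feature vectors from similar tests, and the target is the next test’s wait. The dimensions and the framework choice are placeholders, not a prescribed design.

    # Minimal LSTM wait-time regressor sketch (PyTorch shown for illustration).
    import torch
    import torch.nn as nn

    class WaitLSTM(nn.Module):
        def __init__(self, n_features: int = 32, hidden: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):               # x: (batch, seq_len, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])    # predict from the last timestep

    model = WaitLSTM()
    x = torch.randn(16, 10, 32)             # 16 sequences of 10 past tests
    print(model(x).shape)                   # torch.Size([16, 1])
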
For analytical models, you may wish to start with M/M/1 queues and then move to more complex models that better represent the lab configuration.
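As a concrete starting point, the standard M/M/1 result gives the mean queueing delay as lam / (mu * (mu - lam)) for arrival rate lam and service rate mu. The sketch below computes it for made-up example rates; the real lab would need richer multi-server models.

    # M/M/1 starting point: expected time a test waits in queue (requires lam < mu).
    def mm1_wait_hours(lam: float, mu: float) -> float:
        """Mean queueing delay for an M/M/1 system, in hours."""
        if lam >= mu:
            raise ValueError("queue is unstable when arrivals meet/exceed service")
        return lam / (mu * (mu - lam))

    # Example: 2 tests/hour arrive; the platform completes 3 tests/hour on average.
    print(mm1_wait_hours(2.0, 3.0))  # ~0.67 hours of expected queueing delay
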
The team will work on the following in this project:
A tool that provides start and end times for a test configuration via an API query, plus a web interface that helps visualize relevant historical data for that test configuration; and at least one statistical model and one neural-network-based model that work satisfactorily on the data generated by the team following the schema and other information provided earlier. This project is aimed at having the team work on estimation methods, API development, and some full-stack development. Analytical modeling and a more complex web interface that lets users provide input are additional, but the team is strongly encouraged to plan appropriately and work on those as well.
Although some core principles are provided, the project is purposefully open-ended so that the team gets a chance to think through the details, discuss among themselves, communicate their thoughts to the sponsor, receive feedback, iterate, develop, explore, and learn over the course of the project.
Don created ResumeFab to help people create resumes tailored to each specific job listing.
He has been a serial entrepreneur who has grown companies from 0 to hundreds of employees multiple times. He’s looked at thousands of resumes and has hired hundreds of people. He has experienced the frustration of having to translate what is on a resume to see why the applicant thinks they are a fit for the job.
Many resumes are created to present general skills and experiences. As a result, job seekers must summarize their education, work history, and accomplishments without knowing which parts will matter most to a particular employer. Employers and recruiters then spend time translating a broad resume into an assessment of whether the applicant truly fits the role.
ResumeFab addresses this mismatch by retaining a structured database of a user’s background—such as education, projects, employment (with titles), specific roles, completed tasks, and even hobbies. Using this information, AI can generate a new resume for each job listing that emphasizes the experiences and skills requested in that listing. The goal is to enable job seekers to create a bespoke resume for every job listing they pursue, rather than repeatedly editing a single generic resume.
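To illustrate the idea, here is a small Python sketch of how a structured background record might be assembled into a job-specific generation prompt. The record fields and prompt wording are assumptions for illustration, not ResumeFab’s actual pipeline.

    # Illustrative prompt assembly from a structured background record.
    import json

    background = {
        "education": ["BS Computer Science, NC State, 2025"],
        "employment": [{"title": "QA Intern", "tasks": ["wrote regression tests"]}],
        "projects": ["Senior design: admin dashboard"],
        "hobbies": ["competitive trivia"],
    }

    def tailoring_prompt(job_listing: str) -> str:
        return (
            "Given this candidate background:\n"
            f"{json.dumps(background, indent=2)}\n\n"
            "Write a one-page resume emphasizing only the experiences and "
            f"skills relevant to this job listing:\n{job_listing}"
        )

    print(tailoring_prompt("Seeking a junior test engineer..."))
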
ResumeFab is currently running as a free application, but it lacks administrative tools that provide visibility into who is using the product and how it is being used. The sponsor wants a dashboard that allows administrators to view Key Performance Indicators with user details, and to control selected features within the software.
This need becomes especially important as ResumeFab moves toward charging users. The company has a Stripe account, and the codebase already includes integration with the Stripe API to execute transactions, but the sponsor does not want to enable billing without basic monitoring and controls. A dashboard that provides billing visibility, usage tracking, and safe administrative control over charging behavior is required before turning on payments.
A Fall 2025 Senior Design student team worked on the first iteration of ResumeFab (https://resumefab.com/). They built a full-stack web app that lets users upload and parse an existing resume, enter a job description, and generate a tailored resume using an OpenAI-driven prompt pipeline (including markdown-structured prompts, one-shot prompting, and a multi-step verification approach). They also implemented core user features like multiple resume styles, a resume library/history with download/export, and a skill-mapping analysis against the job description, plus supporting backend APIs and testing infrastructure.
For Spring 2026, the student team will expand the existing basic administrator dashboard into a more robust, easier-to-use Control Panel that is responsive on both desktop and mobile devices. This Control Panel will allow ResumeFab administrators to view, and in some cases control, real-time and historical performance, billing, and usage data for clients.
The project has two primary technical focuses:
SAS provides technology that is used around the world to transform data into intelligence. A key component of SAS technology is providing access to good, clean, curated data. The SAS Data Management business unit is responsible for helping users create standard, repeatable methods for integrating, improving, and enriching data. This project is being sponsored by the SAS Data Management business unit in order to help users better leverage their data assets.
An increasingly prevalent and accelerating problem for businesses is dealing with the vast amount of information being collected. Combined with a lack of data governance, enterprises are faced with conflicting use of domain-specific terminology, varying levels of data quality/trustworthiness, and fragmented access. A lack of timely and accurate data reporting may end up driving poor decisions, operational inefficiencies, and financial losses, in addition to exposing businesses to regulatory penalties, compliance failures, and reputational damage, ultimately putting them at a competitive disadvantage.
To help address the underlying issues related to managing their data, a business may buy, or build, a data governance solution that allows them to holistically identify and govern an enterprise's data assets. SAS has developed a data catalog product which enables customers to inventory the assets within their SAS Viya ecosystem. The product also allows users to discover various assets, explore metadata, and visualize how the assets are used throughout the platform.
In past semesters, SAS student teams have explored different avenues and extensions to a data catalog, such as operationalizing governance and creating accessible components to visualize metadata. In this project, we'd like to focus instead on the discovery and, primarily, the exploration phase. A data catalog provides a curated view of the metadata stored within the environment, with the intention of providing easy-to-understand dashboards and visualizations about specific types of metadata to a widespread audience. When displaying cataloged assets to an end user, a view of the metadata for a specific context or user persona is presented. This view hides some of the metadata and the underlying complexity.
And this is where we get to one of the weaknesses of visualizing a data catalog: the catalog provides a metadata system that supports storing any type of metadata object, but displaying that information to end users (outside of just an API) in a useful way is difficult without an understanding of the context (including the business, domain, and user personas).
What the sponsor would like to investigate in this project is a metadata explorer: an interface that gives the user unfiltered access to all the metadata in the environment (that they are authorized to view). If the catalog is akin to a traditional, physical library card system, then the explorer is akin to walking through the bookstacks or shelves. An explorer opens up possibilities for end users. If we're already providing an interface to view the metadata, then why not also be able to edit it? Or, for example, once a user understands the underlying metadata, they could create a customized dashboard.
On startup, the application must ingest/load a pre-defined set of metadata (in a JSON format). An initial set of metadata will be provided by the sponsors as well as a script for the generation of synthetic metadata. All metadata provided will conform to the Open Metadata schema (https://docs.open-metadata.org/v1.4.x/main-concepts/metadata-standard/schemas).
The application must provide an API to perform create, read, update, and delete (CRUD) operations upon metadata in the system.
The application must provide an easy-to-use, performant interface for users to explore the metadata available in the system. The students should brainstorm various approaches or UI patterns that could be used to allow users to visualize the metadata.
The application should provide the following functionality:
Out of Scope
The following are out of scope for this project:
Teen Health Research (THR), Inc. is a startup dedicated to providing a program for parents and children ages 10 to 19 to inform and facilitate communication related to health and well-being. THR has developed an interactive web app for the program.
Journaling has been a popular way to manage communication and mental health issues. The Teen Health Research platform includes a collaborative journal for teens to reflect on their challenges, excitements, and feelings and to have a conversation with their parents, who have their own journal in which to reflect on their relationship with their children.
The main innovation in this project will be to develop AI-guided journaling support that engages users in responsibly recording their experiences while keeping them in full control of how much, and in what form, their experiences are shared with their family on either side.
The envisioned software solution will be a personal journaling web-app with the following features:
Web-based and (as a stretch) mobile app. The conversation aspect can be prototyped as a Discord bot, to ease the transition to the main app framework for the existing Let’s Talk app.
The Let’s Talk app is deployed using Heroku and uses a MongoDB Atlas database.
Small to medium models running on our VCL machines with API access should be sufficient as a starting point.
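Many local model-serving stacks expose an OpenAI-compatible endpoint, so a sketch of the call path might look like the following. The base URL, model name, and journaling prompt are assumptions for illustration only.

    # Sketch: query a small self-hosted model through an OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(base_url="http://vcl-host:8000/v1",  # placeholder VCL endpoint
                    api_key="not-needed-for-local")

    def guided_followup(entry: str) -> str:
        """Ask the model for one gentle, reflective follow-up question."""
        resp = client.chat.completions.create(
            model="local-small-model",  # placeholder model id
            messages=[
                {"role": "system",
                 "content": "You are a gentle journaling guide for teens. "
                            "Ask one reflective follow-up question."},
                {"role": "user", "content": entry},
            ],
        )
        return resp.choices[0].message.content
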
The ARNAV Lab is Dr. Jhala’s research group. They investigate computational structures and methods that are useful in representing and mediating human interpretation and communication of narrative in interactive visual media, such as film and games. The Jhala research group uses symbolic and probabilistic tools to represent and construct coherent visual discourse and apply generative techniques for automated and semi-automated tools to interpret and collaboratively create visual narratives.
The ARNAV Lab is creating a platform for simulating conversations between groups of people that have rich personalities, experiences, opinions, and expressions. While an original project was developed using the Unity game engine, the visual management of character sprites and user interaction pieces is challenging to maintain and run user evaluations for, due to the constant updates to the game engine. We are looking for a lightweight way to focus our study on the conversation aspect of the AI agents and to simulate different types of community interactions.
Inspired by some of the Discord servers that have bots that are designed to help a Game Master run DnD simulations, we want to develop communities of AI agents and human participants to do rich simulations of communities and conversations over time.
In this project, we will be developing a framework that integrates with Discord to include:
The framework will interact with Discord via its API using a well-supported client (e.g., Discord.js). Students can propose the language and paradigm for the framework itself, but a web-based platform is suggested. Use of an LLM and likely an image model will be necessary.
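For a feel of the bot side, here is a minimal persona-agent sketch using discord.py, a well-supported Python client (an alternative to the Discord.js client named above, shown here only to keep the document's examples in one language). The persona reply logic is a placeholder for the LLM-driven agents the project will develop, and the token is a placeholder.

    # Minimal discord.py sketch: a bot that answers "!ask" messages in persona.
    import discord

    intents = discord.Intents.default()
    intents.message_content = True  # required to read message text
    client = discord.Client(intents=intents)

    def persona_reply(persona: str, text: str) -> str:
        # Placeholder: call the team's LLM with the persona's profile + history.
        return f"[{persona}] thinking about: {text}"

    @client.event
    async def on_message(message):
        if message.author == client.user:
            return
        if message.content.startswith("!ask"):
            await message.channel.send(persona_reply("Elder", message.content[4:]))

    client.run("DISCORD_BOT_TOKEN")  # placeholder token
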
Bandwidth is a software company focused on communications. Bandwidth’s platform is behind many of the communications you interact with every day. Calling mom on the way into work? Hopping on a conference call with your team from the beach? Booking a hair appointment via text? Our APIs, built on top of our nationwide network, make it easy for our innovative customers to serve up the technology that powers your life.
Any organization working with personal or confidential data requires tools that can remove sensitive information safely and accurately. Manual redaction processes are difficult to scale and can lead to errors. Bandwidth has an opportunity to provide automated, privacy-first tooling that aligns with our trust and compliance commitments.
The AI-Redaction Service is a tool designed to automatically detect and remove sensitive information—such as phone numbers, emails, dates (e.g., DOB), credit card numbers, and other Personally Identifiable Information (PII)—from call transcripts or audio. It enhances privacy and compliance for customers using Bandwidth’s call recording and transcription features. Students will build a text-based redaction MVP, with optional audio enhancements as stretch goals.
This request is part of an ongoing project within the Civil, Construction and Environmental Engineering (CCEE) Department, sponsored by the Alaska Department of Transportation and Public Facilities (AKDOT&PF). As the most seismically active state in the United States, Alaska faces unique infrastructure challenges. AKDOT&PF has been supporting NC State research focused on enhancing the seismic safety of bridges for over 20 years.
After a damaging earthquake, it is critical to quickly determine the status of civil infrastructure, such as highway bridges. This helps state agencies make informed decisions, avoid unnecessary risks, and reduce potential losses. In seismic regions, bridges play a vital role after an earthquake by serving as lifelines, providing access for emergency vehicles and helping reconnect isolated communities.
However, assessing the condition of dozens or even hundreds of bridges immediately after an earthquake is a challenge, especially in states where bridges are spread across vast and remote areas, which is the case in Alaska. Figure 1 shows the transportation network (bridges as circles) overlaid with the intensity of the seismic hazard (darker colors represent a bigger hazard). Sending engineers to every bridge site for inspections can take days or weeks, time that might delay critical emergency response efforts.
The focus of our research project is to develop a rapid and practical method to evaluate the post-earthquake performance of bridges. This project also looks beyond post-earthquake response. The same type of analysis can be used to run scenarios before an earthquake happens to identify vulnerable bridges in advance, improving emergency planning and even informing better design choices.

Figure 1. Location of AKDOT&PF bridges and the identified hazard level for different regions in Alaska.
A Computer Science Senior Design team in Fall 2025 developed a full-stack web application designed to assess bridge structural integrity in response to earthquake events. The system fetches real-time earthquake data from USGS, performs structural assessments on bridges using scientific models, and provides a web interface for monitoring and management.
Key Features:
While Phase I of the Rapid Post-Earthquake Assessment Tool enabled the completion of key components, the following tasks (Phase II) are recommended to further develop the system and prepare it for use by engineers in post-earthquake decision making.
The items listed below represent a comprehensive wish list rather than a fixed scope. Some tasks are expected to be relatively straightforward to implement, while others will require more significant development effort; priorities can be discussed and established with the student team at the start of the project.
Technology choices are based on what seemed appropriate during Phase I of the project and on what Phase II may require for improvements. Recommendations stemming from the completion of Phase I include becoming familiar with the available User Guide, Developer’s Guide, and Installation Guide to understand the details of Phase I.
Resources used during Phase I:
Decidio uses Cognitive Science and AI to help people make better decisions and to feel better about the decisions they make. We plan to strategically launch Decidio into a small network fanbase, then grow it deliberately, methodically and through analytics into a strong and accelerating network.
Most landing pages explain. We want to create one that invites interaction — and rewards discovery.
Decidio is developing a decision-intelligence platform that helps people make better choices with clarity and confidence. As awareness grows, the goal is for the brand to teach itself — not through long paragraphs, but through a playful, visual experience people can’t resist touching.
Students on this project will build a dynamic, game-like visualization — based on the Decidio logo system — that responds to user actions, hides solvable logic beneath the surface, and quietly converts curiosity into early-access signups. Visitors don’t just see Decidio. They explore it, experiment with it, solve it — and talk about it.
This isn’t meant to be “a game.” It’s an interactive brand teaser that communicates value through interaction — while generating measurable early-engagement signals. It blends design, psychology, product strategy, and engineering — exactly the kind of challenge that prepares a senior team for real-world environments.
When players finally “get it,” they unlock an invitation-only early access. The result is:
We would like all these interactions to be instrumented so that it becomes a living experiment, allowing A/B testing of rules, difficulty, and feedback loops.
Beyond the numbers, the qualitative goal is simple: People should leave thinking, “That was fun — and this company thinks differently.”
The experience is intentionally mysterious — and social. “I figured out how to unlock it — can you?” That creates natural sharing behavior:
Over time, rewards can expand to include vendor perks or feature privileges — deepening viral loops without changing the mechanic.
A companion mobile experience mirrors the same system:
This pushes the team to think cross-platform — while staying scoped and realistic.
TypeScript, JavaScript, CSS, HTML, and Node.js, using React and possibly React Native (for the mobile app). Auto-layout uses physics-based force-directed mechanics. Students are encouraged to explore D3.js-based libraries.
Front-end / Interaction
Rules & Game Logic
Backend Services
Analytics & Experimentation
Mobile Companion (Stretch Goal)
Ethics & Accessibility
The project naturally splits into phases, minimizing risk and creating clear milestone reviews.
FreeFlow Networks is a startup pursuing innovative technologies for potential commercialization. One current project is Whisker Wings, a flight-based game centered on energy management, environmental interaction, and skill-driven play. This project supports the long-term scalability and quality of content creation for the game.
Physics-based flight games present unique challenges in level design. Unlike traditional platformers or action games, flight games have to obey strict physical constraints such as turn radius, reaction time, speed envelopes, and other limitations. Even small level design errors can result in levels that are technically impossible, unfair, or frustrating to play.
Hand-crafting large numbers of high-quality flight levels is time-consuming and prone to human error. Yet the expectation of mobile game consumers today is long play time across many bite-size levels. As a result, a small game development team may struggle to scale level creation to meet consumer demand without sacrificing quality.
Fully procedural level generation, while it sounds good in theory, can produce level layouts that are boring (not fun), violate needed game constraints, or require extensive correction from a human designer.
The motivation for this project is to create a designer-controlled, constraint-driven level authoring tool that speeds up level creation while guaranteeing that levels are playable. The system should encode flight-specific level design knowledge directly into the tool, allowing a designer to generate and iterate on levels more efficiently and without fear of common level design failures.
The proposed solution is an in-engine level authoring tool that uses procedural assistance constrained by flight-specific rules. Instead of replacing human designers, the tool assists designers by generating level layouts that already respect known limitations/constraints.
Designers will specify their intent for the level, such as aircraft class, difficulty, skill focus, which obstacles/enemies to include, and how many of them. Based on these inputs, the system will generate a bounded flight corridor; place checkpoints, obstacles, and other challenging events; and validate the resulting layout against a set of hard constraints. If constraints are violated, the system must clearly report the failure or regenerate the level.
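The sketch below illustrates one such hard-constraint check in Python: verifying that consecutive checkpoints never demand a tighter turn than the aircraft class allows. The 2D geometry and the 45-degree threshold are simplifying assumptions for illustration, not the game’s actual flight model.

    # Illustrative playability check: flag segments that require too sharp a turn.
    import math

    def heading(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    def validate_turns(checkpoints, max_turn_deg_per_segment=45.0):
        """Return a list of constraint violations (empty list == playable)."""
        violations = []
        for i in range(len(checkpoints) - 2):
            h1 = heading(checkpoints[i], checkpoints[i + 1])
            h2 = heading(checkpoints[i + 1], checkpoints[i + 2])
            # wrap the heading change into [-180, 180] degrees
            turn = abs(math.degrees(math.atan2(math.sin(h2 - h1),
                                               math.cos(h2 - h1))))
            if turn > max_turn_deg_per_segment:
                violations.append(f"turn of {turn:.0f} deg at checkpoint {i + 1}")
        return violations

    # The last leg bends ~78 degrees, so this layout is reported, not shipped:
    print(validate_turns([(0, 0), (10, 0), (20, 1), (21, 10)]))

A generator loop can call a battery of such checks and either report the failures to the designer or regenerate, as required above.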
Example use cases include:
The resulting tool will significantly reduce iteration time while improving overall level quality.
Existing Environment:
Technical Constraints:
The Laboratory for Analytic Sciences, LAS, is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information.
The idea is to create an AI-agent-based framework to play the game Diplomacy, in the spirit of the following arXiv paper: https://arxiv.org/abs/
Dr. DK Xu is an Assistant Professor in the Department of Computer Science at North Carolina State University. His research focuses on AI system design, large language models (LLMs), multimodal reasoning, and agentic AI systems for scientific and engineering applications.
He is joined by Dr. J. Paul Liu, a Professor in NC State’s Department of Marine, Earth and Atmospheric Sciences (MEAS) and Director of International Affairs in the College of Sciences, whose expertise in coastal and marine geology, including continental shelf sedimentation, seismic profiling, and sea-level rise, provides strong domain grounding for ocean and coastal data workflows.
Dr. Ruoying (Roy) He is a Goodnight Innovation Distinguished Professor in MEAS and leads the Ocean Observing and Modeling Group; his expertise in physical oceanography, ocean observing systems, and numerical modeling closely aligns with the scientific use cases and validation needs of an AI-enabled ocean science assistant.
The project is further supported by graduate student mentor Bowen (Berwin) Chen, who will provide hands-on technical guidance on system integration, backend implementation, and development best practices throughout the project.
The OceanVoice project builds on the development of Ocean AI (https://oceanai.ai4ocean.xyz/), an LLM-powered AI science assistant designed to support oceanographic data exploration, analysis, and visualization. Ocean AI can process multimodal inputs and outputs, including text, tables, numerical datasets, and plots, to assist researchers, students, and domain users in scientific workflows. This senior design project will extend Ocean AI with a voice-first interaction layer, enabling users to interact with the platform through spoken queries and receive responses through both voice and on-screen multimodal outputs. The project emphasizes robust system integration, task execution, and human-in-the-loop interaction rather than model training.
The current Ocean AI platform provides advanced capabilities for text-based data retrieval, analysis, and reasoning. However, interaction is limited to typed input, which introduces friction in many realistic scientific workflows such as exploratory analysis, collaborative discussions, teaching demonstrations, and hands-busy or mobile environments.
Spoken input offers a more natural interface but introduces new system challenges:
Without explicit system support for grounding spoken input, executing tasks safely, and interacting with users to resolve ambiguity, a voice interface can easily lead to incorrect or misleading results. For a next-generation multimodal LLM-powered ocean science assistant, supporting voice interaction is not simply a UI feature; it requires careful design of intent understanding, controlled task execution, and transparent output presentation. Addressing these challenges will significantly improve the usability, robustness, and real-world applicability of the Ocean AI platform.
The OceanVoice senior design team will develop a voice-enabled interaction module that can be integrated into a simplified version of the existing Ocean AI platform. The system will allow users to issue spoken scientific queries, execute corresponding data retrieval or analysis tasks, and receive results through combined voice and visual outputs. To ensure feasibility and clarity, the project is structured around three progressively advanced capability levels.
The platform will be delivered as a web application integrated with a simplified Ocean AI backend, emphasizing system correctness, usability, and transparency.
To clarify the scope of the system and guide implementation and testing, the project defines a small set of representative voice interaction scenarios associated with each capability level. These scenarios serve as concrete targets for system design and will also be used as structured demo cases during the final presentation.
Level 1: Voice Input to Single-Task Execution
Level 1 focuses on reliable speech-to-intent grounding and execution of a single scientific task.
Level 2: Simple Multi-Step Task Execution
Level 2 extends the system to support spoken queries that imply a short sequence of actions executed in a fixed order.
Level 3: Clarification and Confirmation for Robust Interaction
Level 3 introduces basic interaction awareness, allowing the system to engage users when information is missing or when an action requires explicit confirmation.
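A minimal sketch of the Level 3 loop, building on the intent shape sketched earlier; the `ask` and `execute` callbacks are hypothetical stand-ins for the system's dialogue and task layers:

```python
def run_with_clarification(intent, ask, execute):
    """Minimal Level 3 interaction loop (illustrative only).

    `ask` poses a question to the user and returns their answer;
    `execute` runs a fully specified task. Both are supplied by the
    surrounding system; the names here are hypothetical.
    """
    # 1. Clarify: fill any missing slots by asking the user.
    for slot, value in intent.slots.items():
        if value is None:
            intent.slots[slot] = ask(f"Which {slot} did you mean?")
    # 2. Confirm: never run a gated action without explicit consent.
    summary = f"{intent.task} with {intent.slots}"
    if intent.needs_confirmation and ask(f"Run {summary}? (yes/no)") != "yes":
        return "cancelled"
    return execute(intent)
```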
Constraints include ensuring system robustness, avoiding unintended task execution, and maintaining clear separation between user input, system decisions, and final outputs.
Teen Health Research (THR), Inc. is a startup dedicated to providing a program for parents and children ages 10 to 19 to inform and facilitate communication related to health and well-being. THR has developed an interactive web app for the program. The “program” is a theoretically sound and empirically evaluated framework consisting of a series of modules on topics such as dealing with curiosity about bodies and changes to them, hygiene, and norms and expectations for relationships and healthy behavior. The app allows parents and children to create profiles and walks them step-by-step through the activities and informational materials in the modules of the program.
The goal is to take the Let’s Talk app from prototype to a deployed version capable of smoothly supporting tens of families, and to add premium features. The basic app is running and has been tested with a small number of focus-group users. It is currently in closed beta.
The objectives for this semester’s project are:
The Node.js-based app is deployed on Heroku. Content is drawn from the Storyblok CMS via its API; Storyblok has custom modules for content development. A MongoDB database holds app data such as user profiles. The current app is available at http://go.lets-talk-app.com
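As one concrete point of reference, module content can be pulled from Storyblok's v2 content delivery API over HTTPS. The sketch below (in Python for brevity, although the app itself is Node.js) assumes a hypothetical slug and a public access token kept in the app's configuration; it is illustrative, not the app's actual data layer:

```python
import requests

STORYBLOK_TOKEN = "<public-access-token>"  # hypothetical; kept in app config

def fetch_module(slug: str) -> dict:
    """Fetch one program module from Storyblok's v2 content delivery API."""
    resp = requests.get(
        f"https://api.storyblok.com/v2/cdn/stories/{slug}",
        params={"token": STORYBLOK_TOKEN, "version": "published"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["story"]["content"]

# Example: fetch_module("modules/hygiene-basics") would return the fields
# defined in Storyblok's custom module components (the slug is invented).
```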
http://lets-talk-families.com provides an overview of the program and app.
Dr. Card is a teaching professor in the Computer Science department at North Carolina State University whose teaching focuses on game design and development courses in the game development concentration of the Computer Science degree.
Video game design and development is a nexus point of various disciplines, with individuals from different fields combining their talents and expertise to create a world and an interactive experience. With the growing popularity of digital games as a form of entertainment, an art form, and a teaching tool, the need to educate students in game design has grown. Students from varying disciplines and fields of study take game design courses, meaning instructors cannot rely on students prepared by a single disciplinary foundation (such as Design or Computer Science). This increase in the number and diversity of students, especially those not in programming-related disciplines, necessitates an improvement in the available support tools for students in computer science-based game design courses.
Currently, one tool used to teach game design is PuzzleScript, which has not been in active development for several years and has some idiosyncratic behavior. This project aims to introduce a new rule-replacement programming tool similar to PuzzleScript, with additional features that are not present in the current PuzzleScript implementation.
The tool should permit students to create 2D games using a rule-replacement language similar to PuzzleScript. The tool should include a code editor with syntax highlighting, an execution window to run the game, and the ability to compile and package a build of the game to be played elsewhere.
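For a sense of the core mechanic, here is a toy Python sketch of rule replacement on a single grid row. The cell names and the push rule are invented for illustration; a real tool would parse PuzzleScript-like rule syntax and scan in all four directions:

```python
# Toy illustration of rule replacement on one grid row. Cells hold object
# names; a rule maps a pattern of adjacent cells to replacement cells.
Rule = tuple  # (pattern, replacement), both tuples of cell values

def apply_rule(row: list, rule: Rule) -> bool:
    """Apply the rule at the first matching window; return True if it fired."""
    pattern, replacement = rule
    n = len(pattern)
    for i in range(len(row) - n + 1):
        if tuple(row[i:i + n]) == pattern:
            row[i:i + n] = replacement
            return True
    return False

row = ["player", "crate", "empty"]
# "A player next to a crate pushes it": [player|crate|empty] -> [empty|player|crate]
push = (("player", "crate", "empty"), ("empty", "player", "crate"))
while apply_rule(row, push):   # PuzzleScript-style rules re-apply until stable
    pass
print(row)                     # ['empty', 'player', 'crate']
```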
There are a few requirements on what the new tool should include:
The tool would be expected to be usable in the classroom environment for teaching and game creation purposes.
The students may propose various technologies to use in facilitating the creation of the tool. The end tool should align with NC State privacy standards and should not store any student information remotely.
Decidio uses Cognitive Science and AI to help people make better decisions and to feel better about the decisions they make. We plan to strategically launch Decidio into a small network fanbase, then grow it deliberately, methodically and through analytics into a strong and accelerating network.
When users make a sequence of decisions to purchase a collection of products, such as a wardrobe or the fixtures for a home remodel, the individual decisions are not independent in terms of visual and structural features. For example, while choosing light fixtures for a kitchen, the recessed lighting, pendant lighting, dining table lighting, and studio lights for wall art are all chosen to fit an overall theme of materials, sizing, style, form, color, and so on. Users making these choices get overwhelmed because each decision must not only match the overall theme but also work well with the other choices already made. Sometimes there is an inversion of preferences as well: one might not like red cars in general but really like red on a particular car (such as a Mustang or Corvette). In this case, there is an exception to their preference for a single item in the collection.
With an innovative visual interface that allows users to navigate their preferences by switching between left brain and right brain interactions (photos vs tables of numbers), Decidio seeks to make the process of discovering, modeling, and interacting with user preferences more pleasurable.
The preference learning project aims to develop a machine learning model that captures preferences in both the visual domain (pictures of items) and the feature domain (tables of numbers with feature labels). For this project, we want to build an initial database of products with their features and their photos. From the app interface, users will then search for or select products, either from a gallery of images that appears when swiping right or from the collection of features that appears when swiping left. Each user will have a profile with named collections of products, to which they can add related products. Users indicate their preferences by reordering images or selecting features to narrow down suggested products, and they can add their selected products to lists.
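One plausible (not prescribed) starting point for the feature-domain model is a Bradley-Terry-style reduction: treat each "user placed product a ahead of product b" event as a binary classification on the difference of feature vectors. The sketch below uses scikit-learn with toy data; the feature names and preference pairs are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature vectors for three products (rows) and pairwise preferences
# collected from the swipe/reorder interface: (a, b) means a ranked above b.
features = np.array([[0.9, 0.1], [0.4, 0.8], [0.2, 0.3]])  # e.g. [warmth, size]
pairs = [(0, 1), (0, 2), (1, 2)]

# Bradley-Terry-style reduction: classify the sign of the feature difference.
X = np.array([features[a] - features[b] for a, b in pairs])
X = np.vstack([X, -X])                        # symmetrize: (b, a) is a negative
y = np.array([1] * len(pairs) + [0] * len(pairs))

model = LogisticRegression().fit(X, y)
print(model.coef_)   # learned weights = relative importance of each feature
```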
Expected features of the app:
There will be two machine learning models that will get trained from data collected from this interface:
There are no technical constraints on this project.
Dr. Renee Harrington is an Associate Teaching Professor and Director of Undergraduate Programs in the Department of Health and Exercise Studies at North Carolina State University. Her teaching and research focus on various facets of health and wellness including nutrition, resilience, stress management, and physical activity promotion.
At NC State University, as with many other college campuses, students commonly struggle with poor eating habits that contribute to fatigue, stress, reduced academic performance, and long-term health risks. Traditional nutrition education can feel overwhelming, overly technical, and disconnected from students’ everyday routines, resulting in low engagement and limited behavior change.
Although many students want to improve their eating habits, they often lack a low-stakes, positive, and judgment-free way to explore nutrition concepts. Without engaging, hands-on learning opportunities, students have few avenues to build confidence and practical skills around balanced eating. This gap underscores the need for an approach that not only teaches nutrition principles but also integrates guidance seamlessly into students' everyday campus routines.
To address these challenges, students need a dynamic, personalized, and student-centered learning experience, one that allows them to experiment, make choices, and receive feedback in a supportive, risk-free environment. A platform that frames nutrition as approachable and enjoyable has strong potential to boost awareness, motivation, and day-to-day healthy decision-making. An enhanced system that incorporates personalized feedback and immersive features such as guided decision-making, avatar-based exploration, or real-time campus-specific prompts could meaningfully improve students’ ability to adopt and maintain healthier eating habits.
This project will expand upon the work initiated during the Fall 2025 semester, which established four narrative-based scenarios focused on making healthy, balanced choices in NC State dining halls. Building on this foundation, the team will further develop an interactive, game-based platform designed to improve nutrition literacy and healthy decision-making among NC State students.
The platform will engage players through scenario-based and other challenges that reflect real eating situations on and near campus. By making food choices, completing tasks, and observing the outcomes of their decisions in a risk-free environment, students will gain practical knowledge about balanced nutrition and how to apply it in daily life. Real NC State dining locations, meal options, and student preferences will be integrated to create a personalized learning experience. Through puzzles, mini-quests, and branching decision paths, users will receive immediate feedback reinforcing core nutrition concepts such as portion balance, nutrient density, and long-term effects of dietary habits.
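As a sketch of how one branching scenario might be represented, the Python data model below uses hypothetical scene ids, dining locations, and feedback strings; the real content would come from the HES subject-matter experts:

```python
from dataclasses import dataclass

# Hypothetical content model for one branching dining-hall scenario.
@dataclass
class Choice:
    label: str          # e.g., "Grilled chicken bowl"
    feedback: str       # immediate nutrition feedback shown to the player
    next_scene: str     # id of the scene this choice branches to

@dataclass
class Scene:
    scene_id: str
    prompt: str
    choices: list[Choice]

lunch = Scene(
    scene_id="dining_hall_lunch",
    prompt="It's 12:30 at the dining hall. What do you grab?",
    choices=[
        Choice("Grilled chicken bowl", "Good protein and veggie balance.", "afternoon"),
        Choice("Fries and a soda", "Quick energy, but light on nutrients.", "slump"),
    ],
)
```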
A key strength of this project is its alignment with NC State’s Health and Exercise Studies GEP, which requires every undergraduate to complete a 100-level HES course. This platform will be implemented directly within these required courses, embedding the game into the nutrition module as an experiential learning tool. Faculty will incorporate game data, such as choices made or modules completed, into class discussions and/or reflection assignments. As a result, the product will positively impact more than 3,000 students each semester, ensuring broad reach and meaningful influence on student well-being.
Future enhancements may include more immersive features such as customizable avatars, campus-navigation gameplay, or adaptive nutrition coaching based on individual decision patterns. These additions would deepen engagement and strengthen the connection between virtual learning and real-world behavior.
Benefits to End Users:
Students working on this project do not need prior nutrition knowledge or expertise; all subject-matter content will be provided. The team has flexibility in selecting technologies and development paradigms, as long as the final product is user-friendly, accessible, and feasible within the semester timeframe. The platform should be designed with sustainability in mind, favoring technologies that are low-cost to maintain and compatible with university-supported hosting environments.
It is preferred that the tool has the following:
The platform should be developed as an open-source simulation tool that initially supports the nutrition module in HES courses but is designed for scalability. Its modular architecture will allow additional scenarios, immersive features such as avatars, campus navigation, or adaptive nutrition coaching, and potential adoption by faculty in other departments or institutions with minimal redevelopment. This ensures the tool can reach thousands of students at NC State and potentially many more across diverse campuses.
Impartial is a criminal justice nonprofit. We exist to build meaningful human connections and programs that improve the criminal justice system through personal and community-driven engagement. Impartial believes that one of the ways to do that is by engaging future justice leaders in games that can help them better understand what the US justice system is, what role they could play in it, and, most importantly, what the system could be, using gaming to understand possibilities.
Impartial has built nine criminal justice video games to date: Investigation, Grand Jury, Plea Deals, Motions to Dismiss, Jury Selection, Prosecution, The Defense, Jury Deliberation, and Sentencing. Your challenge is to develop the tenth and final game in the Justice Well-Played series: Post-Verdict. Seven of the games have been developed through the NCSU Capstone project, creating valuable assets we can share across games. For consistency and efficiency, we're using the same characters, names, and scenes throughout the series. Post-Verdict concerns the post-verdict appeals and sentencing path in a federal criminal case. Another Senior Design team will be working to consolidate the previous titles in the series into a cohesive whole while you are working on this game.
Below is an outline for the Post-Verdict game. There are three possible outcomes going into the game:
After the verdict has been issued, the player is then presented with Post-Conviction Defense Options:
Motion for Judgment of Acquittal / Motion to Overturn Verdict: This motion faces a steep standard, as the judge must view all evidence "in the light most favorable to the prosecution" and can only overturn if no reasonable jury could have reached a guilty verdict. Defense must identify compelling legal errors or evidentiary insufficiencies that undermine the conviction.
Motion for New Trial: Often requested simultaneously, based on judicial errors during trial (improper jury instructions, evidentiary rulings), prosecutorial misconduct, newly discovered evidence, or ineffective assistance of counsel. Provides the judge a middle-ground option. If Judge Grants an Acquittal and/or New Trial, the Prosecution has 30 days to appeal to the 4th Circuit Court of Appeals.
During a 4th Circuit Appellate Hearing, each side presents for 20 minutes to a three-judge panel. No new evidence—only legal arguments based on the trial record.
Possible 4th Circuit Rulings:
If conviction stands after appeals, sentencing is mandatory. First, there is a Pre-Sentence Investigation and Pre-Sentence Report which covers prior criminal history, personal background, family situation, substance abuse and mental health history, employment and education, financial status, and victim impact statements.
Defense Preparation considers character reference letters, psychological evaluations, rehabilitation plans, evidence of family responsibilities, post-release employment and housing plans.
Finally, Sentencing Considerations include advisory guidelines calculations, mandatory minimums (if applicable), offense severity and criminal history category, victim harm (physical, emotional, financial), restitution amounts, and Bureau of Prisons placement recommendations (security level, geographic proximity to family, medical needs, specialized programs).
Many of the previous games have been implemented using Ren’Py. Any other technology that you think would serve the best interests of the game should also be considered.
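If the team stays with Ren’Py, the post-conviction decision point might be structured as a simple menu. The label, variable, and dialogue below are hypothetical placeholders, not content from the existing games:

```
default motion = "none"

label post_verdict_start:
    menu:
        "How does the defense respond to the guilty verdict?"

        "File a Motion for Judgment of Acquittal":
            $ motion = "acquittal"

        "File a Motion for New Trial":
            $ motion = "new_trial"

        "Proceed directly to sentencing":
            $ motion = "none"

    "The judge takes the motion under advisement."
    return
```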
Impartial is a criminal justice nonprofit. We exist to build meaningful human connections and programs that improve the criminal justice system through personal and community-driven engagement. Impartial believes that one of the ways to do that is by engaging future justice leaders in games that can help them better understand what the US justice system is, what role they could play in it, and, most importantly, what the system could be, using gaming to understand possibilities.
Impartial has built nine criminal justice video games to date related to a real case: Investigation, Grand Jury, Plea Deals, Motions to Dismiss, Jury Selection, Prosecution, The Defense, Jury Deliberation, and Sentencing. These games were developed to provide insight into the workings of the criminal justice system in the United States, and each corresponds to a part of a real trial. Seven of the games have been developed through the NCSU Capstone project, creating valuable assets we can share across games. For consistency and efficiency, we're using the same characters, names, and scenes throughout the series.
This semester there is a tenth and final entry in the series, Justice Well-Played: Post-Verdict, which concerns the post-verdict appeals and sentencing path in a federal criminal case and will be developed by another Senior Design Games team concurrently with this project. While these games share a case, each game has been standalone, and choices made in each game do not carry over to the other games. This project aims to combine the games into a single experience in which choices and narrative carry throughout.
Multiple student teams over the past several years have developed separate games within the same narrative of a single court case. These games have required the players to make choices as they play the games; however, as each game was developed separately, the choices made in each game did not carry to any future games. This project aims to combine those previous games into a single coherent experience, where the choices made throughout the games affect future game states, and add additional polish to the game as a whole.
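Ren’Py provides MultiPersistent specifically for sharing data between separate Ren’Py games on the same machine, which is one plausible mechanism for carrying choices across titles while they remain separate builds (the key and field names below are hypothetical):

```
init python:
    # One shared store for the whole series; unset fields read as None.
    shared = MultiPersistent("org.impartial.justice-well-played")

label record_verdict(outcome):
    $ shared.verdict = outcome   # e.g. "guilty", "not_guilty", "hung_jury"
    $ shared.save()              # MultiPersistent requires an explicit save
    return

label start:
    call record_verdict("guilty")
    if shared.verdict == "guilty":
        "Later games can branch on this shared outcome."
    return
```

If the games are instead merged into a single Ren’Py project, ordinary save-game variables would suffice; MultiPersistent matters only while the titles remain separate executables.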
Students working on this project will be incorporating each individual game into the larger, connected narrative spanning the entire case, maintaining consistency in:
To optimize the gameplay experience, the team will also consider the following:
Multiple phases of playtesting should be performed to provide feedback on the game, allowing for the team to polish the connected narratives and ensure accuracy in the representation of criminal justice elements.
Previous games have used Ren’Py. Any other technology that you think would serve the best interests of the game should also be considered.
Katabasis is a non-profit organization that specializes in developing educational software for children ages 8-15. Our mission is to facilitate learning, inspire curiosity, and catalyze growth in every member of our community by building a digital learning ecosystem that adapts to the individual, fosters collaboration, and cultivates a mindset of growth and reflection.
The difference between a student who thrives in computer science and one who grows to hate it often comes down to their debugging experiences. Until recently, debugging was not an explicit focus of computer science classes; instead, it was assumed students would internalize debugging skills through the practice of programming. Debugging requires students to coordinate multiple skills, such as code reading, writing, tracing, and testing—a finding which has been reinforced in various research studies conducted within the context of text-based programming environments (Fitzgerald et al., 2005; Adelson & Soloway, 1985; Vainio & Sajaniemi, 2007; Guzdial, 2015; Spohrer & Soloway, 1986). Additionally, McCauley and colleagues (2008) noted in their comprehensive review of debugging research that proficient debuggers should have knowledge of the intended and current program, an understanding of the programming language, general programming expertise, and knowledge of bugs, debugging methods, and the application domain (McCauley et al., 2008). Despite increasing recognition of the importance of debugging, there remain surprisingly few studies that explicitly teach debugging skills, and even fewer in K-12 settings. Moreover, it’s unclear how the findings and strategies developed from these studies apply to block-based programming or hybrid environments (Kafai et al., 2020), formats which are increasingly used to teach programming to beginning programmers.
A major challenge is that debugging involves several invisible cognitive processes—such as interpreting code, tracing execution, evaluating program behavior, and locating and fixing bugs—that novices cannot easily observe or learn from. Without tools that make these processes explicit, structured, and visual, young learners struggle to develop effective debugging strategies and lose confidence in their ability to understand code.
This project addresses that problem by creating an interactive block-based programming experience that visualizes the core components of debugging: (1) code reading, (2) code tracing, and (3) debugging. By decomposing these tasks and designing clear interaction flows, scaffolded steps, and visual/audio cues, the student team will develop a system that makes expert debugging strategies visible, helping novices build stronger problem-solving skills and a deeper understanding of programming.
You will design a series of interfaces that demonstrate and support at least three different debugging activities in a block-based coding environment: (1) code reading, or the ability to look at code and interpret its purpose and functionality, (2) code tracing, the ability to walk through lines of code individually to understand what is happening in sequence, and (3) debugging, the ability to find and fix the bug/error. Your team will decompose each of these tasks to design the experience, including the sequence of interactions, presentation of information, and useful visual/audio cues.
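As an illustration of the tracing decomposition, the Python sketch below uses an invented block set and shows the step/snapshot structure a tracing interface would visualize and narrate; the eventual implementation will be in Unity:

```python
import copy

def trace(blocks, state):
    """Execute blocks one at a time, yielding (block, state) after each step."""
    for block in blocks:
        op, *args = block
        if op == "move":                  # e.g., drive the Agricoding drone
            state["pos"] += args[0]
        elif op == "plant":
            state["planted"].append(state["pos"])
        yield block, copy.deepcopy(state)  # snapshot for the tracing UI

program = [("move", 1), ("plant",), ("move", 2), ("plant",)]
for block, snapshot in trace(program, {"pos": 0, "planted": []}):
    print(block, "->", snapshot)
```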
Your interfaces should feature clean, visually pleasant design and should adhere to common usability design principles, keeping an elementary school target audience in mind. The team may also receive feedback from various stakeholders over the course of the semester, which will influence the project’s design.
The interfaces you implement should be well-designed and built with extensibility in mind; this project should be easily compatible with other block-based programming environments created in Unity. To demonstrate the efficacy of your design during live demos, you will integrate your project with Agricoding, an existing block-based programming game and farming simulator created by Katabasis, in which players use code to control a virtual drone that tills soil and plants, waters, and harvests crops on a virtual field.
This project will be created in Unity to ensure compatibility with Agricoding and other block-based programming environments created in Unity. The version of Agricoding which integrates your project must be built to WebGL for interactive demos. Prior experience with Unity or other game engines is preferred.
The North Carolina Museum of Natural Sciences (NCMNS) is a natural history museum in Raleigh, North Carolina. The museum is the oldest in the state and the largest natural history museum in the Southeastern United States. The Paleontology and Geology Lab is one of the five research labs in the museum’s Nature Research Center, part of the Research and Collections department.
Cretaceous Creatures (https://cretaceouscreatures.org/) is a public science project at the NC Museum of Natural Sciences that allows school children to look for and record fossils. There is also a currently running exhibit, Dueling Dinosaurs, that shows two almost-complete dinosaurs that were found in Montana and obtained by the museum.
Students are sent boxes of dirt and sediment from an excavation site, along with tools for finding and recording the small fossils they find in the sediment. A Google Form is used to record their classification of each fossil. There is a database of all specimens with information about them, along with category-level 3D models, created by photogrammetry, of relevant specimens. Currently the website is hosted in WordPress, and the specimens are recorded in Excel files with information collected through Google Forms. The web framework is static, and there isn’t an elegant way to explore the data in the database.
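For example, once the form responses are exported, even a small script can answer questions the static site cannot. The Python sketch below uses pandas; the file name and column name are assumptions about the spreadsheet's layout:

```python
import pandas as pd

# Hypothetical file and column names; the real ones come from the
# Google Forms export that feeds the Excel specimen records.
specimens = pd.read_excel("specimens.xlsx")

# Count classifications per fossil category and pull out one category.
by_category = specimens.groupby("category").size().sort_values(ascending=False)
shark_teeth = specimens[specimens["category"] == "shark tooth"]
print(by_category.head())
print(len(shark_teeth), "shark-tooth records")
```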
This project has 3 thrusts:
For quick deployment, it would be ideal if much of the work could be done through WordPress plugins and scripts, but we can discuss whether this will be too limiting for scalability. If we choose a different framework, there may be issues with getting it approved by the Museum and NC State OIT because of the future management effort. This is the trade-off we will have to address in choosing the technology solution.
Dr. Srougi is an associate professor (NCSU Biotechnology Program/Dept. of Molecular Biomedical Sciences) whose research focuses on enhancing STEM laboratory skills training through the use of innovative pedagogical strategies. Most recently, she has worked with a team to develop an interactive, immersive, and accessible virtual simulation to aid in the development of student competencies in modern molecular biotechnology laboratory techniques.
Biopharmaceutical manufacturing requires specialized expertise to design and implement processes that are compliant with good manufacturing practice (GMP). Design and execution of these processes therefore requires that the current and future biopharmaceutical workforce understand the fundamentals of both molecular biology and biotechnology. While there is significant value in teaching lab techniques in a hands-on environment, the necessary lab infrastructure is not always available to students. Moreover, while online learning works well for conceptual knowledge, there are still challenges in how best to convey traditional ‘hands-on’ skills to a virtual workforce in support of current and future biotechnology requirements. The need for highly skilled employees in these areas is only increasing. Therefore, to address current and future needs, we seek to develop virtual reality minigames of key laboratory and biotechnology skills geared towards workforce training for both students and professionals.
The project team has previously created an interactive browser-based simulation of a key biotechnology laboratory skill set: sterile cell culture techniques. This learning tool is geared towards university students and professionals. In the proposed project, we intend to develop two virtual reality minigames using the Unity game engine to reinforce the fundamental skills required to perform the more advanced laboratory procedures represented in the simulation. The game interactions occur through the Meta Quest 3 VR system. This project is Phase II of a previous senior design project, in which one minigame (use of a pipet aid; see below) was refined and one prototype biohaptic device was developed. The current project will focus on making the pipet aid minigame ready to deliver to users in a classroom setting. The enhancements for the pipet aid minigame include: 1) refinement of the serial communications needed to integrate and use the biohaptic device in the game, 2) a clear and easy-to-use tutorial for the game (especially for users new to VR), 3) clear feedback on the user's pipet aid technique, and 4) easy navigation within the game environment. Finally, the team will create a second minigame focused on the use of micropipettes. This second minigame will follow a workflow similar to the pipet aid minigame, and the team will design a bespoke biohaptic prototype for micropipette usage that can integrate with the game.
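For the serial-communication refinement, it may help to validate the biohaptic prototype's protocol outside Unity first. The Python sketch below uses pyserial; the port, baud rate, and command format are assumptions made for illustration, not the prototype's documented interface:

```python
import serial  # pyserial

# Assumed port, baud rate, and line-based command format for the prototype.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as device:
    device.write(b"VIBRATE 50\n")            # hypothetical haptic command
    reply = device.readline().decode(errors="replace").strip()
    print("device replied:", reply)
```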
Minigame content: All minigames will feature the following core laboratory competencies, which particularly benefit from advanced interactivity and realism: 1) how to accurately use a single-channel set of pipettes and 2) how to accurately use a pipet aid (the minigame that has already been created).
Length and Interactivity: Minigames should aim to be a 10-15 minute experience. The games should allow users free choice to explore and engage in the technique while providing real-time feedback to correct any errors in user behavior. They should be adaptable for future use with biohaptic feedback technology to provide a ‘real-world’ digital training experience. A prototype biohaptic pipet aid has been created and is available to iterate upon and improve.
Cohesion: The set of minigames should connect to themes and design represented in the virtual browser-based simulation previously developed. Therefore, the visual design of the minigames should closely match the real-world laboratory environment.
Students working on this project do not need content knowledge of biotechnology or biotechnology laboratory skills; however, a basic interest in the biological sciences and/or biotechnology is preferred. This project will be a virtual reality extension of a browser-based interactive simulation written in Three.js and maintained in a GitHub repository. Development of the minigames should be done in Unity. Games should be designed to run on relatively low-end computer systems and be guided by accessibility. Proper licensing permissions are required if art and/or other assets are used in game development.