Bandwidth is a software company focused on communications. Bandwidth’s platform is behind many of the communications you interact with every day. Calling mom on the way into work? Hopping on a conference call with your team from the beach? Booking a hair appointment via text? Our APIs, built on top of our nationwide network, make it easy for our innovative customers to serve up the technology that powers your life.
Currently, we do not have a good way to detect whether an answered call was picked up by an answering machine (for example, the default Apple Voicemail greeting) or by a real human. Our customers want to take different actions depending on whether the recipient is a human or a machine. Providing this service enables a variety of automations, such as leaving a message if it's a machine or spinning up an AI agent to have a real-time interaction with a human. This was historically a difficult problem because of the variety of greetings that can be recorded on an answering machine. However, with the advent of new AI models that specialize in voice recognition, it is now a problem that can be tackled effectively.
With a stream of calls hitting our Programmable Voice numbers, a solution would be to capture this real-time stream and classify each answered call as human or machine. Then, depending on the recipient, we could hand the call to separate AI agents, each with its own prompts governing how and what to respond with. The human agent will be more intensive, enabling a complete conversation with the end human. The machine agent will wait for the necessary cues and deliver whatever static message is necessary.
A potential solution could use Amazon’s Nova Sonic model to facilitate the Conversational AI aspect after verifying what kind of user the recipient is.
The application will be hosted in AWS, and students will receive an AWS account with corresponding credentials. We suggest utilizing technologies such as Node.js or Python for the backend; however, students are ultimately free to select the technologies with which they are most proficient. Audio recordings will be supplied to students for model testing and validation purposes.
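As a rough illustration of the classify-then-route flow (a minimal sketch only; the function names, agent placeholders, and classification stub below are assumptions, not Bandwidth or AWS APIs):

```python
# Hypothetical sketch of the classify-then-route flow. Every name here is an
# illustrative placeholder; it is not Bandwidth's API or the Nova Sonic SDK.

from dataclasses import dataclass


@dataclass
class CallSession:
    call_id: str
    opening_audio: bytes  # first few seconds captured from the media stream


def classify_audio(audio: bytes) -> str:
    """Return 'human' or 'machine'.

    A real implementation would pass the audio to a voice-recognition model
    hosted in AWS; this stub just labels empty audio as a machine.
    """
    return "machine" if len(audio) == 0 else "human"


def human_agent(session: CallSession) -> None:
    # Placeholder for a full conversational agent (e.g., driven by Nova Sonic).
    print(f"{session.call_id}: starting interactive conversation")


def machine_agent(session: CallSession) -> None:
    # Placeholder: wait for the voicemail cue, then deliver a static message.
    print(f"{session.call_id}: leaving pre-recorded message after the beep")


def route_call(session: CallSession) -> None:
    label = classify_audio(session.opening_audio)
    (human_agent if label == "human" else machine_agent)(session)


if __name__ == "__main__":
    route_call(CallSession(call_id="demo-001", opening_audio=b"\x00" * 16000))
```

In a real deployment the classification step would consume the live media stream rather than a buffered clip, but the routing decision would look much the same.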
The Benjamin Franklin Scholars Program at N.C. State allows students to simultaneously pursue bachelor’s degrees in both engineering and the humanities/social sciences. By combining a degree in engineering with one in the liberal arts, the program provides students with a broad perspective which better equips them to solve the complex problems of today and the future.
Franklin scholars can combine any degree in the College of Engineering with any degree in the College of Humanities and Social Sciences (plus economics). In addition to their course work for those two degrees, students in the program take two courses specifically designed for Franklin Scholars: STS302H (Science, Technology and Human Values) and E497 (The Franklin Scholars Capstone). Students typically complete the program in 4 to 5 years.
The primary purpose of the program is to develop future leaders and technical professionals whose range of skills and perspectives will match the kinds of interdisciplinary challenges the world increasingly faces. The program fosters a strong community among both its students and alumni that emphasizes intellectual curiosity, open-mindedness, breadth of education, and diversity of interests.
In its 33 years the Franklin Scholars Program at N.C. State has produced nearly 250 graduates who have used their engineering and liberal arts training in a variety of settings and professions, including industry, academia, and government. Their careers run a broad gamut, including engineers, lawyers, physicians, policy analysts, and entrepreneurs, to name a few.
In 2024, the Franklin Scholars Program at N.C. State formed an alumni association (BFSAA) whose mission is:
to strengthen and enhance the Benjamin Franklin Scholars (BFS) Program by further connecting students and alumni in order to better exchange knowledge and experience while deepening the sense of a larger, tighter community.
Specifically, the key objectives of the BFSAA are to better connect alumni and students through an enhanced alumni mentoring and advising program, alumni speaker programs, periodic student/alumni events, and a robust alumni directory/database that can be accessed via a BFSAA webpage portal.
In the fall of 2024 the initial version of the portal/database was developed by a team of students in the CSC Senior Design Project course under the leadership and mentorship of Mr. Ballou, who created the functional specifications. That portal was implemented in January 2025 and has been running successfully since then. Although the portal and database serve most of the needs of the BFSAA, a number of changes/enhancements have since been identified that would not only improve the portal’s usability but also provide additional functionality.
The problem this project will address is the development and implementation of those changes to create a Version 2.0. Version 2.0 will serve to further advance the above goals, as it will allow for improved communication and connection between students and alumni alike. Mr. Ballou, who has extensive knowledge of the current version, would serve again as the project liaison and mentor to the student team.
Version 1.0 of the BFSAA webpage allows secure access to BFS alumni information, including not only contact and relevant personal and professional information but searchable fields on degrees, willingness to mentor and advise, willingness to participate in BFS-sponsored events, previous attendance at such events, and donations to the program. Each alumnus has a user ID and password by which they can update their own information and view and search selected fields of other alumni.
Current students can use the webpage and database to search for alumni whose backgrounds best fit their needs for an ongoing mentor or advisor who might be of assistance in career planning. The BFS Program Director uses the application to track alumni donations and participation in various program social events. Alumni can better connect with one another for ongoing social and professional purposes. Certain fields in the database are open to view by all users while others are only viewable by each individual alumnus and the Program Director. Hence, the application provides various levels of security.
The webpage also includes a news page which provides a means of communicating to all alumni periodic news on the BFS program as well as on the BFS Alumni Association itself.
For more information on the current database, the webpage capabilities, and technical and implementation details, the functional and detailed specifications as well as all system and user documentation for Version 1.0 can be consulted. Those documents were produced by the BFSAA to provide original design requirements and by the CSC Senior Design Project team as part of the course requirements.
Enhancements for Version 2.0 fall into three categories:
For the most part, the technologies and applications utilized to develop the initial webpage and database should be utilized in developing the enhancements and additional functions. As in the original application, the additional functions should be accessible either by personal computer or mobile phone. There are no known licensing constraints or legal or IP issues.
Existing technologies/languages/frameworks: Django, PostgreSQL, MaterialUI
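As a purely illustrative sketch of how per-field visibility could be modeled in the existing Django/PostgreSQL stack (the model, field, and method names below are assumptions, not the actual Version 1.0 schema):

```python
# Hypothetical Django models sketching an alumni profile with per-field
# visibility, in the spirit of Version 1.0 (Django + PostgreSQL). Field and
# model names are illustrative assumptions, not the real schema.

from django.contrib.auth.models import User
from django.db import models


class AlumniProfile(models.Model):
    PUBLIC = "public"          # viewable by all logged-in users
    RESTRICTED = "restricted"  # viewable only by the alumnus and the Program Director

    user = models.OneToOneField(User, on_delete=models.CASCADE)
    engineering_degree = models.CharField(max_length=100)
    humanities_degree = models.CharField(max_length=100)
    willing_to_mentor = models.BooleanField(default=False)
    willing_to_attend_events = models.BooleanField(default=False)
    donation_total = models.DecimalField(max_digits=10, decimal_places=2, default=0)
    donation_visibility = models.CharField(
        max_length=10,
        choices=[(PUBLIC, "Public"), (RESTRICTED, "Restricted")],
        default=RESTRICTED,
    )

    def visible_fields(self, viewer_is_director: bool) -> dict:
        """Return only the fields the viewer is allowed to see."""
        data = {
            "engineering_degree": self.engineering_degree,
            "humanities_degree": self.humanities_degree,
            "willing_to_mentor": self.willing_to_mentor,
        }
        if viewer_is_director or self.donation_visibility == self.PUBLIC:
            data["donation_total"] = str(self.donation_total)
        return data
```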
My Digital Hand (MDH) is being developed as part of an NSF-funded research grant titled “Characterizing and Empowering Student Success when Traversing the Academic Help Landscape”. Dr. Battestilli and Dr. Heckman are the Principal Investigators on the grant.
My Digital Hand (MDH) is an office hour queuing system that was developed to support student help-seeking and help-seeking research. MDH was originally developed by colleagues at the University of North Carolina – Chapel Hill and was used by multiple institutions. Unfortunately, the MDH system went down in December 2024. The original developer graduated from UNC-CH, so we decided to rebuild the system, better and with more features, at NC State!
There has already been significant work on the MDH system. The main office-hours interaction and the first-come, first-served queuing structure have been implemented. Additionally, administrative features, including basic analytics, roster management, course settings, and course creation, are implemented. There are still several areas for enhancement: different queuing options, student wait information, course copy/import features, and bug fixes.
The system is currently deployed and has already served 11 courses and over 500 students. We will be expanding the deployment to Duke University.
The MDH project involves a preformed team. The project will continue to add new features to the system while fixing bugs. There are four main feature areas:
Queuing & Wait Information. The current system adds students to the queue in a first-come, first-served fashion, but there are times when course teaching staff want to help students in a different order. Student request tickets are currently annotated with information about their arrival time and the number of completed and canceled interactions. The queue should be updated with information about the most recent completed interaction time. This matters because different prioritizations can help with throughput. On days with deadlines, the queues in some classes can get long, so prioritizing students who haven’t been seen yet would maximize seeing everyone at least once. We would like to provide at least two different queuing options, beyond first-come, first-served, that can be selected by the instructor and then applied to the queue ordering: one would be least-recently-seen, and another could prioritize by severity (e.g., an IDE issue that stops all work would be more severe than a single test failure), as sketched below. Additionally, we would like to provide students an estimate of when they will likely be seen in the queue. Currently, we show the number of students in the queue, but we intentionally don’t show the student’s position in the queue. A time estimate when they sign in may help students determine if they have time to wait.
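For illustration only (Python here, although MDH's backend is Spring Boot/Java; the ticket fields are assumptions, not the MDH schema), the proposed orderings could be expressed roughly as:

```python
# Illustrative-only sketch of two alternative queue orderings driven by
# ticket metadata. Field names are assumptions, not the actual MDH schema.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Ticket:
    student: str
    arrived_at: datetime
    last_completed_at: Optional[datetime]  # None if never helped this session
    severity: int  # e.g., 3 = blocked (IDE broken), 1 = single test failure


def first_come_first_served(queue: list[Ticket]) -> list[Ticket]:
    return sorted(queue, key=lambda t: t.arrived_at)


def least_recently_seen(queue: list[Ticket]) -> list[Ticket]:
    # Students never helped yet sort first, then by oldest last interaction.
    return sorted(
        queue,
        key=lambda t: (t.last_completed_at or datetime.min, t.arrived_at),
    )


def by_severity(queue: list[Ticket]) -> list[Ticket]:
    # Highest severity first; ties broken by arrival time.
    return sorted(queue, key=lambda t: (-t.severity, t.arrived_at))
```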
Instructor Analytics. Instructors can currently see information about recent interactions, the percentage of completed and canceled interactions, the percentage of interactions per day of the week, and a summary of teaching staff interactions. The teaching staff summary has a few ordering bugs that need to be fixed. Additional analytics would provide more insight for instructors. For example, it would be helpful to see a list of interactions for each student so we can identify students who have many interaction requests or a high number of canceled requests. Additional analytics will be negotiated as part of the project scope and based on feedback from current users at NC State and Duke University.
Course Copy & Settings: MDH course settings typically don’t change much between semesters. A course copy feature would help with copying pre- and post-interaction questions. Additionally, the pre- and post-interaction questions user interface could be enhanced to support reordering.
Data Export: The interaction data collected in MDH is used for computing education research. Therefore, a robust export of data is required. The current export will be reviewed by computing education researchers to ensure the appropriate information is provided in a format that is helpful for data wrangling and to answer research questions.
Feature updates will be regularly deployed to the production MDH server. User experience studies would be helpful, particularly on the faculty side (and possibly with colleagues from Duke). This project will be released open source.
Backend: Spring Boot w/ Hibernate ORM and MySQL/Maria DB + Testing with JUnit
Frontend: React + Testing w/ Mocha
Runs on Linux using Docker containers. Shibboleth authentication achieved with reverse proxy container.
The project will be released open source using the MIT license.
Hitachi Energy serves customers in the utility, industry and infrastructure sectors with innovative solutions and services across the value chain. Together with customers and partners, we pioneer technologies and enable the digital transformation required to accelerate the energy transition towards a carbon-neutral future.
Automate the generation of transformer test reports from various input sources, with output formats in Microsoft Excel and Word. The goal is to streamline reporting, reduce manual effort, and ensure consistency and accuracy across documentation.
Focus Areas:
For Current Transformer (CT) Test Reports, the input data will be provided in several XML files. The application will need to parse and extract relevant test data from the XML files and automatically populate the corresponding fields in the Excel and Word report templates.
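As a rough, hedged sketch of this pipeline (the XML layout, the placeholder convention in the templates, and the choice of openpyxl/python-docx are assumptions for illustration, not requirements):

```python
# Hypothetical sketch: parse CT test results from XML and fill report
# templates. The XML structure, placeholder names, and choice of openpyxl /
# python-docx are illustrative assumptions only.

import xml.etree.ElementTree as ET

from docx import Document           # pip install python-docx
from openpyxl import load_workbook  # pip install openpyxl


def extract_test_data(xml_path: str) -> dict:
    root = ET.parse(xml_path).getroot()
    # Assumed layout: <test><name>Ratio Error</name><value>0.12</value></test>
    return {t.findtext("name"): t.findtext("value") for t in root.iter("test")}


def fill_excel(template: str, output: str, data: dict) -> None:
    wb = load_workbook(template)
    ws = wb.active
    for row in ws.iter_rows():
        for cell in row:
            if isinstance(cell.value, str) and cell.value in data:
                cell.value = data[cell.value]  # replace placeholder with value
    wb.save(output)


def fill_word(template: str, output: str, data: dict) -> None:
    doc = Document(template)
    for para in doc.paragraphs:
        for key, value in data.items():
            if key in para.text:
                para.text = para.text.replace(key, value)
    doc.save(output)


if __name__ == "__main__":
    results = extract_test_data("ct_test_results.xml")
    fill_excel("ct_report_template.xlsx", "ct_report.xlsx", results)
    fill_word("ct_report_template.docx", "ct_report.docx", results)
```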
Key Features:
Deliverables:
Focus Areas:
Key Features:
Deliverables:
The North Carolina Department of Health and Human Services oversees the provision of health and human services to all residents of North Carolina, with a particular focus on vulnerable populations such as children, the elderly, individuals with disabilities, and low-income families. The Department collaborates with healthcare professionals, community leaders, advocacy groups, and various local, state, and federal entities to achieve this objective.
The IT Division collaborates with various agency divisions on projects with varying timelines, requirements, and funding. This results in the use of multiple systems for managing data, budgets, invoicing, and funding, affecting the timeliness, consistency, and accuracy of crucial information. Such a fragmented approach complicates planning and decision-making.
The absence of a centralized view hinders timely, data-driven decisions and causes inefficiencies in programs and initiatives.
The project aims to enhance IT operations by creating a centralized hub for data, contracts, and financial information. This will improve visibility, streamline decision-making, and strengthen financial accountability with custom dashboards and robust reporting tools. The core is the data hub, with other reporting elements as extensions.
Centralized Data Hub/Repository: Build a database for regular uploads and automated data ingestion from multiple systems.
Tailored Dashboards: Develop custom dashboards for different management roles with relevant, up-to-date information.
Flexible and Scalable Architecture: Design the system to easily incorporate new data sources and systems.
Powerful Reporting: Create custom reports that aggregate data across projects, funding sources, and timeframes.
Improved Decision-Making: Consolidate and visualize data to support faster, more strategic decisions.
We welcome proposals on technologies, platforms, designs, and solutions.
SAS provides technology that is used around the world to transform data into intelligence. A key component of SAS technology is providing access to good, clean, curated data. The SAS Data Management business unit is responsible for helping users create standard, repeatable methods for integrating, improving, and enriching data. This project is being sponsored by the SAS Data Management business unit to help users better leverage their data assets.
An increasingly prevalent and accelerating problem for businesses is the vast amount of information they collect and generate. Combined with a lack of data governance, enterprises are faced with conflicting use of domain-specific terminology, varying levels of data quality and trustworthiness, and fragmented access. The result is a struggle to answer domain-specific business questions in a timely and accurate manner, and potentially a situation where the business is put at regulatory risk.
The solution is to either build or buy a data governance solution that allows an enterprise to holistically identify and govern its data assets. At SAS we've developed a data catalog product which enables customers to inventory the assets within their SAS Viya ecosystem. The product also allows users to discover various assets, explore their metadata, and visualize how the assets are used throughout the platform. In short, our data catalog is built to assist users in the discovery and exploration of their data assets, but there comes a point where users are ready to actively make use of those assets. With this shift, the usage patterns and needs of users change as well. Users have a specific data asset in hand, and they're looking to answer specific questions about it, about its usage, or both, in service of a business problem.
This reveals some shortcomings in our data catalog. Our catalog excels at letting users dive into the details of dataset metadata, but this can sometimes be overwhelming, or the information is not presented in a way that is easily consumed. This is compounded when users are context-switching between another SAS product and our catalog; the user is looking for some information in our data catalog that they can take and use in their work, but it can be difficult to sift through.
The concept of data cards has developed in recent years in the Machine Learning space as a way to provide standardized documentation for machine learning models and their data assets. Data cards provide concise, structured summaries of the important aspects of a dataset (metadata) needed by users to understand, evaluate, and use a resource effectively. Data cards act as a nutritional label, so to speak, but unlike a nutritional label there isn't one single format for displaying the information, as the right format depends on the business, domain, and user personas.
In addition to developing the data cards, we'd also like to develop them with the goal of embedding them in other products. Users wouldn't have to navigate to and within our data catalog to access these data cards, but they could be added to other applications to readily provide actionable insights about the data assets being used.
As part of this project, you'll create a web application to view data cards.
On startup, the application should ingest/load a pre-defined set of datasets (in CSV format) and their associated metadata (JSON or CSV). An initial set of datasets and their metadata will be provided by the sponsors, but the team can also generate more as needed. It should also be possible to group datasets together into a library.
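A minimal sketch of the startup ingestion step, assuming a simple on-disk layout (the directory structure, file naming, and metadata keys below are assumptions, not the sponsors' format):

```python
# Hypothetical startup ingestion: load each dataset (CSV) plus its metadata
# (JSON) and group datasets into libraries. The on-disk layout and metadata
# keys are assumptions for illustration only.

import csv
import json
from collections import defaultdict
from pathlib import Path


def load_catalog(data_dir: str) -> dict:
    """Return {library_name: [dataset_record, ...]}."""
    libraries = defaultdict(list)
    for meta_path in Path(data_dir).glob("*.json"):
        meta = json.loads(meta_path.read_text())
        csv_path = meta_path.with_suffix(".csv")
        with csv_path.open(newline="") as f:
            rows = list(csv.DictReader(f))
        libraries[meta.get("library", "default")].append(
            {
                "name": meta.get("name", csv_path.stem),
                "description": meta.get("description", ""),
                "columns": list(rows[0].keys()) if rows else [],
                "row_count": len(rows),
                "metadata": meta,
            }
        )
    return dict(libraries)


if __name__ == "__main__":
    catalog = load_catalog("./datasets")
    for library, datasets in catalog.items():
        print(library, "->", [d["name"] for d in datasets])
```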
The application must provide a user interface to view the data cards. Three data card types should be available: a table/dataset data card, a library data card, and a column data card (the column card is a stretch goal).
The application should provide a filter mechanism to select a single table or library out of all those available in the system to navigate to one of the card types.
The table or dataset data card is for a single dataset.
The application should provide an easy-to-understand interface for users to quickly gather the important information about the dataset. This data card should include information like:
The above attributes will be provided by the metadata associated with the dataset. The application should also provide:
Students are encouraged to explore different techniques for providing this information, including the use of GenAI.
The library data card is for a library or schema that contains a collection of tables/datasets. This data card acts much like an aggregate for all its containing tables and provides users with the ability to quickly understand the breadth/scope of the data in the library. This data card should include information like:
The column data card is for a single column in a dataset. This data card should include information like:
If the team makes use of GenAI in the project codebase, the team should use open-source models. We suggest using a tool like Ollama to easily set up model(s) and for its simple REST API.
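For instance, here is a hedged sketch of how a data card might ask a locally hosted model (via Ollama's REST API) for a plain-language summary of a dataset; the model name, prompt wording, and a locally running Ollama server are assumptions:

```python
# Hypothetical use of Ollama's local REST API to generate a plain-language
# dataset summary for a data card. The model name and prompt wording are
# placeholders; a locally running Ollama server is assumed.

import json
import urllib.request


def summarize_dataset(name: str, columns: list[str], row_count: int) -> str:
    prompt = (
        f"In two sentences, describe the dataset '{name}' with {row_count} rows "
        f"and columns: {', '.join(columns)}."
    )
    payload = json.dumps(
        {"model": "llama3", "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(summarize_dataset("iris", ["sepal_length", "species"], 150))
```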
The following are out of scope for this project:
The Undergraduate Curriculum Committee (UGCC) reviews courses (both new and modified), curriculum, and curricular policy for the Department of Computer Science.
North Carolina State University policies require specific content for course syllabi to help ensure consistent, clear communication of course information to students. However, creating a course syllabus, or revising one to meet updated university policies, can be tedious, and instructors may miss small updates to mandatory text that the university requires in a course syllabus. Updating a course’s schedule each semester adds further tedium. In addition, the UGCC must review and approve course syllabi as part of the process for course actions and for reviewing newly proposed special topics courses. Providing feedback or resources for instructors to guide syllabus updates can be time-consuming and repetitive, especially if multiple syllabi require the same feedback and updates to meet university policies.
The UGCC would like a web application to facilitate the creation, revision, and feedback process for course syllabi for computer science courses at NCSU. An existing web application enables access to syllabi for users from different roles, including UGCC members, UGCC Chair, and course instructors (where UGCC members can also be instructors of courses). The UGCC members are able to add/update/reorder/remove required sections for a course syllabus, based on the university checklist for undergraduate course syllabi. Instructors are able to use the application to create a new course syllabus, or revise/create a new version of an existing course syllabus each semester.
We are building on an existing system. The focus this semester will be on adding schedule functionality to the syllabus tool. Additionally, there are several process improvements that should be made to support future deployment of the system.
New features include:
Process improvements include:
Stretch goal:
SciSummary, part of Ariso Intelligence’s personalized AI product suite, is an AI-powered research assistant that helps students and researchers manage and understand scientific literature through tools like automated paper summarization, reference management, and citation generation. Our mission is to make research more accessible, faster, and more insightful—especially for students, researchers, and professionals in academia and industry.
You’ll get to work directly with two serial entrepreneurs. Max, an ex-Google engineer, founded SciSummary as his second startup, and it was acquired by Ariso after reaching 700,000+ users. Erkang, an NC State alum, previously founded JupiterOne, which reached a billion-dollar valuation in three years. His latest startup, Ariso, is building personalized intelligence solutions to empower people to achieve their best in work, academia, and life.
You’ll also have the opportunity to work with some of SciSummary’s renowned advisors, including Dr. Ryan Montalvo, a postdoctoral associate in the Yan Lab at Virginia Tech, and Dr. Helen Chen, Senior Vice Provost for Instructional Programs at NC State.
Writing a literature review is one of the most time-consuming and complex parts of academic research. Although SciSummary already offers tools to find, summarize, and analyze papers, these tools are independent of each other and not optimized for guiding a user through the full literature review process. There is a need to create a seamless, guided experience for users to move from paper discovery to identifying research gaps and generating structured summaries.
This project will build a specialized "Literature Review Mode" within SciSummary. It will integrate, refine, and build upon existing features of the system—paper search, gap detection, and summarization—to guide users through the new full literature review workflow. With the new and existing capabilities, users will:
The project involves engineering a streamlined user experience around these capabilities, including new UX and workflow, UI improvements, API orchestration, and custom tuning of the LLM for this use case. This feature will significantly reduce the time and effort needed for academic researchers to construct literature reviews.
Access to SciSummary Github repo will be provided. Licenses and funded API keys to LLMs will also be provided.
Required technologies:
Familiarity with user experience design, and tools such as Figma, would be a huge plus. You will receive guidance from an experienced UX designer.
This will be a web-based application. There are no known legal or licensing constraints, and IP assignment will be to Ariso Intelligence, Inc.
This request is part of an ongoing project within the Civil, Construction, and Environmental Engineering (CCEE) Department, sponsored by the Alaska Department of Transportation and Public Facilities (AKDOT&PF). As the most seismically active state in the United States, Alaska faces unique infrastructure challenges. AKDOT&PF has been supporting NC State research focused on enhancing the seismic safety of bridges for over 20 years.
After a damaging earthquake, it is critical to quickly determine the status of civil infrastructure, helping state agencies make informed decisions, avoid unnecessary risks, and reduce potential losses. Bridges, in particular, play a vital role after an earthquake by serving as lifelines, providing access for emergency vehicles, and helping reconnect isolated communities. However, assessing the condition of dozens or even hundreds of bridges immediately after an earthquake is a challenge, especially in states where bridges are spread across vast and remote areas, as is the case in Alaska.
During the early stages of this research funded by AKDOT&PF, the framework shown in the following flowchart for performing a post-earthquake bridge assessment was developed. Its foundation relies on three main components:
Figure 1. Three main components for the post-earthquake bridge assessment
What’s Missing?
You might be asking: If the code to perform a post-earthquake bridge assessment already exists, what is missing? We need to add the “rapid” into it. This is how the process currently works, without a software tool to automate it:
As you can see, items 3, 4, and 5 make the process highly manual and time-consuming. Running assessments for hundreds of bridges could take dozens of hours, which defeats the purpose of a rapid post-earthquake evaluation. This highlights the need for a software tool that can automate the entire process: running the MATLAB code for all bridges, storing the results, and generating an inspection priority list based on expected damage. Such automation would significantly reduce assessment time, making it possible to quickly evaluate an entire state’s bridge inventory after an earthquake.
The focus of our research project is to develop a rapid and practical method to evaluate the post-earthquake performance of bridges. This project also looks beyond post-earthquake response. The same type of analysis can be used to run scenarios before an earthquake happens to identify vulnerable bridges in advance, improving emergency planning and even informing better design choices.
We are now seeking a team of computer science students to help transform this research into a user-friendly software tool that can support real-time decision-making when it matters most.
By automating the assessment process, this tool will provide transportation agencies and emergency response teams with fast and reliable evaluations of bridge conditions following earthquakes. It will help prioritize inspections, guide resource allocation, and support timely decisions, playing a key role in maintaining safe and functional routes during critical response efforts.
This is how we envision the post-earthquake assessment of a bridge inventory to happen with a software tool, modifying items 3, 4, and 5 to achieve a rapid assessment.
As we can observe, the primary outcome of the tool is to provide an inspection priority list for AKDOT&PF engineers to know which bridges to inspect first and the damage level in each bridge before deploying engineers to perform on-site evaluation.
What we also envision is the generation of an interactive map. Figure 3 below shows an idea from another rapid assessment tool, ShakeCast, developed by the USGS. (https://code.usgs.gov/ghsc/esi/shakecast/shakecast/-/wikis/home; https://usgs.github.io/shakecast/v3_pages.html; ShakeCast Manual).
Figure 3. Map with inspection priority list and facilities with colors identifying the damage level.
We would like for this to be a Progressive Web Application (PWA, to allow for offline functionality) that can be used in both mobile and desktop browsers. The backend should be written in Python. Any libraries and frameworks used should be open-source and approved by AKDOT IT. Students can choose an appropriate relational database (PostgreSQL, MariaDB, MySQL).
Use of Docker for containerization is encouraged.
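To make the automation of items 3, 4, and 5 concrete, here is a rough Python sketch: run the existing MATLAB assessment for each bridge, store the results, and sort them into an inspection priority list. It assumes the MATLAB code can be invoked per bridge from the command line (e.g., via `matlab -batch`); the MATLAB entry point, result file format, damage-score field, and SQLite stand-in database are all assumptions.

```python
# Hypothetical automation of items 3-5: run the existing MATLAB assessment for
# every bridge, store the results, and produce an inspection priority list.
# The MATLAB entry point, result format, and schema are assumptions.

import json
import sqlite3  # stand-in; production would use PostgreSQL/MariaDB/MySQL
import subprocess


def assess_bridge(bridge_id: str, ground_motion_file: str) -> dict:
    # Assumes a MATLAB function assess_bridge(id, gm_file) that writes
    # <bridge_id>_result.json containing an expected damage score.
    subprocess.run(
        ["matlab", "-batch", f"assess_bridge('{bridge_id}', '{ground_motion_file}')"],
        check=True,
    )
    with open(f"{bridge_id}_result.json") as f:
        return json.load(f)


def run_inventory(bridge_ids: list[str], ground_motion_file: str) -> list[tuple]:
    conn = sqlite3.connect("assessments.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS results (bridge_id TEXT, damage_score REAL)"
    )
    for bridge_id in bridge_ids:
        result = assess_bridge(bridge_id, ground_motion_file)
        conn.execute(
            "INSERT INTO results VALUES (?, ?)",
            (bridge_id, result["expected_damage_score"]),
        )
    conn.commit()
    # Highest expected damage first = inspection priority list.
    return conn.execute(
        "SELECT bridge_id, damage_score FROM results ORDER BY damage_score DESC"
    ).fetchall()
```

The real tool would parallelize these runs and feed the ranked list into the interactive map, but the overall shape of the batch workflow would be similar.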
Deutsche Bank is a truly global universal bank that has operated for 150 years on multiple continents across all major business lines and is a cornerstone of global and local economies. We are leaders and innovators who are driving the financial industry forward, increasingly with the help of technology – from cloud to AI.
This project aims to explore and develop clustering and similarity detection techniques to locate collections of similar repositories and smaller units of code. The goal is to identify cases of duplication of effort and surface these needs to central platform teams to improve engineering efficiency.
Students will develop an application that can accept two or more repositories and will analyze them to identify code similarities. We would like to support different code repository platforms, but students can assume all platforms will be based on Git. Different clustering/similarity techniques can be explored, including the use of generative AI/LLMs, perhaps even detecting code that is functionally similar but is written differently.
The application should be easy to use and performant. We expect to have a large number of repositories processed by multiple users. Results should be clearly communicated to users through a clean and intuitive interface. Students will collaborate with stakeholders throughout the entire software development lifecycle to ensure the application meets their needs.
The application will be developed using appropriate technologies and platforms including Generative AI/LLMs.
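As a baseline illustration (before more advanced clustering or LLM-based semantic approaches), here is a hedged sketch of a simple lexical similarity measure between two checked-out repositories using TF-IDF and cosine similarity; the file-extension filter, paths, and use of scikit-learn are assumptions:

```python
# Baseline sketch: lexical similarity between two checked-out Git repositories
# using TF-IDF over source files and cosine similarity. File extensions and
# paths are illustrative; LLM-based semantic similarity would go beyond this.

from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

SOURCE_EXTENSIONS = {".py", ".java", ".js", ".ts", ".go", ".cs"}


def repo_text(repo_path: str) -> str:
    parts = []
    for path in Path(repo_path).rglob("*"):
        if path.suffix in SOURCE_EXTENSIONS and path.is_file():
            parts.append(path.read_text(errors="ignore"))
    return "\n".join(parts)


def repo_similarity(repo_a: str, repo_b: str) -> float:
    vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_][A-Za-z0-9_]+")
    matrix = vectorizer.fit_transform([repo_text(repo_a), repo_text(repo_b)])
    return float(cosine_similarity(matrix[0], matrix[1])[0, 0])


if __name__ == "__main__":
    print(f"similarity: {repo_similarity('./repo-a', './repo-b'):.2f}")
```

A production system would work at a finer granularity (files, functions) and cluster many repositories at once, but a pairwise score like this is a useful starting point for evaluating more sophisticated techniques.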
Dr. DK is an Assistant Professor in the Department of Computer Science at North Carolina State University and the Director of the Generative Intelligent Computing Lab. His research specializes in artificial intelligence, large language models (LLMs), multimodal reasoning, and AI system design. He leads multiple projects that integrate AI with science and engineering applications, including platforms for ocean science, agriculture, and education.
We currently have an AI-powered ocean science platform that provides advanced capabilities for data retrieval, analysis, and reasoning. However, this application is not yet packaged as a system that end users can consume: it lacks features for user account management, interaction history tracking, and user feedback collection. Without these components:
For a next-generation multimodal LLM-powered science assistant, these capabilities are critical. Managing multimodal interaction data (text, image, table, numeric, video) and systematically collecting user feedback will enable the platform to adapt more effectively to diverse user needs and scientific workflows.
The OceanConnect senior design team will develop a user interaction and engagement tracking module that can be integrated into a simplified version of the existing AI ocean platform that will be provided.
The OceanConnect project builds on the lab’s ongoing development of an LLM-powered AI Science Assistant for ocean data exploration. This assistant can process multimodal inputs such as text, images, tables, numerical datasets, and videos to support oceanographic research and decision-making. The proposed project will focus on designing and implementing a robust user interaction and engagement tracking system to enhance usability, personalization, and continuous improvement of the assistant.
The system will include:
The platform will be delivered as a web application, designed to integrate with the simplified backend of the existing AI science assistant.
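As one possible shape of the interaction and feedback tracking data (a hedged sketch only; the models, tables, and fields below are assumptions and would need to align with the simplified backend provided by the lab):

```python
# Hypothetical SQLAlchemy models sketching user, multimodal interaction, and
# feedback tracking. Names and fields are assumptions, not the lab's schema.

from datetime import datetime

from sqlalchemy import Column, DateTime, Enum, ForeignKey, Integer, String, Text
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String(255), unique=True, nullable=False)
    interactions = relationship("Interaction", back_populates="user")


class Interaction(Base):
    __tablename__ = "interactions"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    modality = Column(Enum("text", "image", "table", "numeric", "video", name="modality"))
    prompt = Column(Text)
    response = Column(Text)
    created_at = Column(DateTime, default=datetime.utcnow)
    user = relationship("User", back_populates="interactions")
    feedback = relationship("Feedback", back_populates="interaction")


class Feedback(Base):
    __tablename__ = "feedback"
    id = Column(Integer, primary_key=True)
    interaction_id = Column(Integer, ForeignKey("interactions.id"), nullable=False)
    rating = Column(Integer)  # e.g., 1-5 usefulness score
    comment = Column(Text)
    interaction = relationship("Interaction", back_populates="feedback")
```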
Front-End
Back-End
Database
Integration
Front-End Development
Back-End Development
Database Design
Constraints include ensuring data security, user privacy compliance, and scalability to handle growth in user base and data volume.
Katabasis is a non-profit organization that specializes in developing educational software for children ages 8-15. Our mission is to facilitate learning, inspire curiosity, and catalyze growth in every member of our community by building a digital learning ecosystem that adapts to the individual, fosters collaboration, and cultivates a mindset of growth and reflection.
For beginner students in computer science, learning to program can feel like an intimidating and unapproachable challenge. The translation of ideas to code requires a specific framework of thought which may not be immediately intuitive to students and can therefore make it difficult for beginners to jump into solving problems with code. Game the Game will leverage (1) block-based programming and (2) debugging pedagogy to reduce this learning curve, eliminating the barrier of syntax errors and allowing students to learn programming concepts by tinkering with existing code.
We are seeking a group of students to develop Game the Game: a block-based debugging video game, targeted to middle school students, that helps reinforce debugging and programming concepts by allowing the player to change the rules of the game in real time with code.
The game will consist of many puzzle-based levels. Each level will be defined through code. For example: a simple level definition may consist of the following functions, each with their own positional/optional arguments: Start(), End(), Tree(), and Door(). For each level, the player is tasked with examining a level and the code that represents the level to rewrite the “rules” of the level through the built-in code editor. The level and the code will be displayed side-by-side and will dynamically update; that is: when the player edits the code, changes should be reflected in the level immediately.
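For illustration only (this is not Katabasis' block-based library or Unity code; the function names simply echo the example above), a level definition and live re-evaluation might conceptually look like the following sketch:

```python
# Conceptual sketch only -- not Katabasis' block-based library or Unity code.
# It shows the idea of a level defined by code that the player can edit, with
# the level re-built every time the code changes.

level_objects = []


def Start(x, y):
    level_objects.append(("start", x, y))


def End(x, y):
    level_objects.append(("end", x, y))


def Tree(x, y):
    level_objects.append(("tree", x, y))


def Door(x, y, open=False):
    level_objects.append(("door", x, y, open))


def rebuild_level(player_code: str) -> list:
    """Re-evaluate the player's edited code and return the new level layout."""
    level_objects.clear()
    # In the real game the block editor would produce this code; here we just
    # execute it against the level-building functions above.
    exec(player_code, {"Start": Start, "End": End, "Tree": Tree, "Door": Door})
    return list(level_objects)


if __name__ == "__main__":
    original = "Start(0, 0)\nTree(2, 1)\nDoor(3, 1, open=False)\nEnd(5, 1)"
    edited = original.replace("open=False", "open=True")  # the player "games the game"
    print(rebuild_level(edited))
```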
Each level should represent an interesting challenge, either from a puzzle standpoint or a conceptual computer science standpoint. Over the course of the game, we want players to learn about various beginner-level programming concepts, such as functions, boolean values, loops, if/else statements, and state. These concepts should be reflected in level design and should scaffold up (build on one another) toward more advanced topics/puzzles.
The touchstones for this project are Baba is You and CodeCombat.
In summary, the game should be developed around the following core feature set:
As mentioned above, students will use Katabasis’ block-based programming library to create the game in Unity. This API will be provided to students at the start of the semester.
Filtration is everywhere. We filter the air we breathe and the water we drink. Filtration can be found in our cars, refrigerators, and even in our showers and coffee machines. Without filtration, there is no baby food, no computers, no medicines, and certainly no clean, safe planet.
Can you imagine a world without filtration? A world where almost nothing is safe? Neither can we.
That's why we at MANN+HUMMEL work every day on filtration solutions for cleaner mobility, cleaner air, cleaner water, and cleaner performance & industry. As one of the world's largest filter manufacturers, we want to understand how this world works — and make it a little better every day. Our goal is to protect people, nature, and machines by using filtration to separate the useful from the harmful.
For more than 80 years, we have stood for leadership in filtration. As a family-owned filter company with German roots, we operate globally at more than 80 locations. For our filtration solutions, we use the combined know-how of our employees, a global research and development network, and the opportunities offered by digitalization. The future is being decided now. That is why today we provide filtration solutions for tomorrow.
In Mann+Hummel’s IoT Lab, we focus on IoT solutions for commercial-grade HVAC systems, paint booths, and other commercial machinery to complement and enhance Mann+Hummel’s core filter business. We have a lot of experience with small installations (dozens of IoT devices per site), but we would now like to explore what an installation with thousands or tens of thousands of IoT devices would look like, in terms of both a scalable backend and a manageable user interface. We would like to digitally simulate an example of this type of installation to assess the performance implications and to potentially provide business stakeholders with a fully featured demo of the capabilities we can provide.
We envision two “apps” to be built:
Both of these will be written using the Elixir programming language, which allows for highly concurrent and fault-tolerant distributed applications (it runs on the Erlang runtime, which is famous for powering WhatsApp, among other applications). Establishing efficient patterns and architectures is a key goal of this project, specifically using the unique aspects of the Elixir language, e.g. the process architecture (“supervisors” and “GenServers” in Elixir lingo).
Goals for this project are:
The web interface should be made with Phoenix LiveView, which excels at reactive web interfaces with two-way communication to a webserver (via websockets). We know this works well in small soft real-time dashboards, but we would like to test its limits in handling a large number of real-time device updates. The implementation should be hostable in the cloud, but should also be easily adaptable to on-premise hosting.
Preferred paradigm:
Technologies:
The Division of Parks and Recreation (DPR) administers a diverse system of state parks, natural areas, trails, lakes, natural and scenic rivers, and recreation areas. The Division also supports and assists other recreation providers by administering grant programs for park and trail projects, and by offering technical advice for park and trail planning and development. The North Carolina Division of Parks and Recreation exists to inspire all its citizens and visitors through conservation, recreation, and education.
Our existing web applications comprehensively support divisional operations across multiple areas: personnel management, financial transactions, field staff coordination, facilities and equipment tracking, strategic planning, incident response, and natural resource management. Division managers rely on data from these systems for critical reporting and analytical functions.
The Applications team has sponsored numerous SDC projects, giving them extensive experience with the development process and proven expertise in supporting student teams while providing valuable real-world software development experience. With eleven NCSU CSC alumni on staff who have all completed SDC projects themselves, the team brings both industry perspective and firsthand knowledge of the program. The Applications team will oversee the project and work directly with the SDC to ensure project requirements are met and the development process runs smoothly.
The Division of Parks and Recreation (DPR) has a passport program that lets visitors track their adventures in state parks and collect stamps at each location. The passport features a page for every state park, along with pages for four state recreation areas, three state natural areas, and nine state trails. Each page includes a photo of a signature landmark, activities available at the site, contact information, and more.
This initiative encourages park-goers to explore each park and collect every stamp. While physical passports offer an engaging experience for visitors, they present challenges: lost passports mean lost progress, and new passport versions require starting over. A digital passport solves these problems by storing visitors’ progress online, allowing easy access and updates without the risk of losing physical copies or progress when new versions are released. In the Spring 2025 semester, two SDC teams developed an initial version of our Digital Passport application, transitioning from the traditional physical passport to a modern Progressive Web Application (PWA) based system.
While these applications provide core functionality, operational needs have since identified several areas for enhancement. The application would benefit from expanded administrative capabilities, improved analytics, and additional public-facing features. This project will build upon the existing codebase to create a more comprehensive and feature-rich application for both park visitors and administrators.
Students will work with the existing Next.JS codebase to implement the enhancements we have defined as Phase 1, while maintaining the application’s existing functionality. Additionally, we have defined a set of Phase 2 features that we would like to see implemented but that are not as critical to getting the application ready for public use; students are free to implement the Phase 2 features in whatever order they choose, starting with the ones they find most interesting.
Offline Accessibility: Being a Progressive Web Application (PWA), the app should be usable offline, enabling users to view their passport information and monitor their progress without requiring a continuous internet connection.
Admin Portal Enhancements: The current state of the application provides a few core admin features but requires some refining and further development of these features. These include CRUD operations on all park info, trails, bucket list items, users (including viewing user progress and merging two users’ progress into one account). Furthermore, we would like to see some visual analytics (charts, graphs, etc.) regarding certain tracked activities such as park visits, stamp collection, access history, etc.
Most park staff will be visiting the application via a desktop. The current state of the application is designed for use on mobile, but we would like to see some breakpoints added to handle visiting the application from a desktop so that administrators viewing it on desktop can access additional admin features such as editing all park information in a table form.
The Digital Passport App offers students a unique opportunity to contribute to a meaningful project that promotes exploration and appreciation of NC State Parks. It also helps reduce costs associated with traditional printing and materials, while allowing students to choose development avenues they find most interesting or fun to implement.
Tools and technologies used are limited to those approved by the NC Department of Information Technology (NC DIT). Student projects must follow state IT policies dictated by NC DIT once deployed by DPR. Additionally, students cannot use technologies under the Affero General Public License (AGPL).
Furthermore, in order for this application to fit well into our current application stack, this project requires the use of Next.JS for all code. The project must also meet Title II accessibility standards, based on WCAG 2.1 AA. Additionally, we will provide a document that includes certain brand guidelines that must be implemented. The current state of the application was not designed for these standards, so some updates may be required in the existing code base to meet them.
Besides these constraints, we do not have requirements on what additional technologies and libraries students can use.
Hitachi Energy serves customers in the utility, industry and infrastructure sectors with innovative solutions and services across the value chain. Together with customers and partners, we pioneer technologies and enable the digital transformation required to accelerate the energy transition towards a carbon-neutral future.
The interpretation of readings of water content in the insulating liquid of power transformers is very complex. Often it is more a guessing exercise than an assessment. More than 95% of the water inside a transformer is in the solid insulation. As the temperature changes, the water migrates into the insulating liquid and back into the paper. However, since the system is hardly ever in equilibrium, the balance of water between the paper and the oil changes dynamically. On top of this, paper degradation generates water, which also accumulates inside the transformer. Hence, it is very difficult to determine if and when there is a breach in the transformer sealing that allows environmental humidity to ingress into the transformer.
At Hitachi Energy we have a patented model, a thermo-chemical digital twin, for estimating the water content inside the coils based on the water content in the oil. It is a mathematical model that divides the coils into “discrete elements” and recalculates the transient balance of water between the solid and liquid insulation.
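Purely to illustrate the general shape of such a calculation (this is emphatically not Hitachi Energy's patented model; the quantities, units, and update rule below are placeholders), a discrete-element transient loop of the kind the C# port would need to reproduce might look like:

```python
# Structural sketch only -- NOT Hitachi Energy's patented model. It shows the
# general shape of a discrete-element transient calculation: loop over time
# steps and exchange moisture between each paper element and the oil. All
# values and the update rule are arbitrary placeholders.

def simulate(n_elements: int, n_steps: int, dt_hours: float,
             exchange_rate: float = 0.01) -> tuple[list[float], float]:
    paper_moisture = [2.0] * n_elements  # placeholder: % water in each paper element
    oil_moisture = 10.0                  # placeholder: ppm water in the oil

    for _ in range(n_steps):
        for i in range(n_elements):
            # Placeholder driving force: difference between element and oil state.
            flux = exchange_rate * (paper_moisture[i] - oil_moisture / 10.0) * dt_hours
            paper_moisture[i] -= flux
            oil_moisture += flux * 10.0 / n_elements
    return paper_moisture, oil_moisture


if __name__ == "__main__":
    paper, oil = simulate(n_elements=20, n_steps=1000, dt_hours=0.5)
    print(f"final oil moisture: {oil:.2f} ppm")
```

The porting work would center on reproducing the real model's element discretization and transient balance in C# while verifying numerical agreement with the existing Python scripts.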
Currently, the program that performs these calculations is written as a series of Python scripts. The scope of the current project proposal is to translate this program into C# and integrate it into the company code base. Thus, the main challenges in the implementation of this project would include:
Depending on the students’ pace, the additional scope described below can be added to deliver a more robust solution:
The Laboratory for Analytic Sciences is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information.
Artificial Intelligence models can now perform many complex tasks (e.g. reasoning, comprehension, decision-making, and content generation) which until recent years have only been possible for humans. Like humans though, an AI model generally works best on tasks that it was specifically trained to perform. While general purpose models (often called foundational models, or pretrained models) can have surprisingly strong performance across a range of applications in their domain, they are typically outperformed within any particular subdomain by a model which was specifically trained for that more narrow subdomain. The most common approach to building these more specialized models is to start with a foundational or pretrained model, and then fine-tune it with a dataset in the more narrow subdomain so that the result is specifically trained, and hyper-focused, on that subdomain.
For example, consider the speech-to-text (STT) model Whisper from OpenAI. Out of the box, this model is capable of producing very accurate transcriptions over a wide range of speech audio recordings (i.e., those having differing languages, dialects, accents, noise environments, verbiage, etc.). Now suppose that a user is only concerned with transcribing speech audio originating from a single environment and a single speaker, e.g. a recording of a professor’s lectures throughout a semester. This is a far narrower subdomain of application. A data scientist could of course apply Whisper and move on to other projects. However, if squeezing out the best accuracy possible is deemed worth the effort, then that data scientist could consider fine-tuning a custom version of Whisper for this particular application.
To fine-tune Whisper, the data scientist would start by considering Whisper to be a pretrained model, i.e. a starting point for the eventual model to be trained. Then the user could gather a relatively small set of labeled data, meaning recordings that are manually transcribed with ground truth transcriptions. In the professor recording example, this might mean going to class for the first week of the semester, recording the audio, and manually transcribing everything that was spoken. With this labeled dataset in hand, the next step would be to fine-tune Whisper. Finding optimal procedures for fine-tuning an AI model can be very complex, and is perhaps both an art and a science, but general-purpose procedures are widely available. The result will be a fine-tuned Whisper variant that, in all likelihood, will produce more accurate speech-to-text results for future recordings of that professor’s class than the original Whisper model will. It is important to note that this fine-tuned model will presumably perform worse than the original Whisper model on most other applications.
The Laboratory for Analytic Sciences (LAS) has been fine-tuning AI models for many years, and expects to continue doing so for many more. So, it would be desirable to make this process as efficient, effective, and user-friendly as possible. Currently, fine-tuning efforts at the LAS are generally done on an individualized basis, using a disorganized bevy of Jupyter Notebooks and data formatting scripts. This introduces unwelcome overhead into the actual process of creating useful models quickly.
QUESTION: Could a web app be created to support model fine-tuning that enhances the efficiency and effectiveness of the process and its results? And could this web app be more user-friendly than the current process (this part of the question has a low bar for success ;))?
To scope this problem a bit further, the aforementioned Whisper model has been a fruitful pretrained model for LAS purposes, and makes a good base model for research and development of the Senior Design project proposal described herein. At least for starters, the Senior Design Team can consider Whisper as the primary pretrained model to be fine-tuned. Also, a Senior Design Team from a previous term produced a nice web app for a related purpose called data routing (so not for fine-tuning models, but for routing data to previously fine-tuned models). If the team agrees that this would make sense, the LAS would like for the current team to integrate the fine-tuning web app and the previously created data routing web app into the same app. More details on this to be presented at the start of the term.
The LAS would like a Senior Design team to develop a prototype system that enables a knowledgeable user (data scientist) to fine-tune Whisper models through an easy-to-use web application. The fine-tuning methodology employed should be fairly advanced and automated, exploring a wide range of tuning parameters/hyperparameters, layer-freezing options, and even complex strategies like LoRA adapters. Ideally, the user will retain full control over the fine-tuning options to be explored, but after configuring the overarching fine-tuning space to explore, the user will be able to simply “press go” and come back hours or days later to find an accurate, fine-tuned model waiting for them (along with extensive logs and visuals displaying the various outputs from the intermediate fine-tuning procedures). It is expected that the user will still need to iterate on this process, but they will be able to do so in an informed fashion using the logs from previous fine-tuning runs, and with a minimal amount of coding/process minutia.
Most likely, the main task of this project will be to develop a browser interface that enables the user to generate a config file which a processing script can use to execute a series of fine-tuning subroutines. The LAS can assist with writing the script(s) which perform atomic fine-tuning jobs if desired.
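To make the config-driven flow concrete, here is a hedged sketch of a fine-tuning runner driven by a generated config file. It uses Hugging Face transformers and peft as one possible toolchain (not a requirement); the config keys are placeholders, and dataset preparation (processor, data collator) is assumed to happen elsewhere:

```python
# Hypothetical config-driven fine-tuning runner for Whisper. Hugging Face
# transformers/peft are one possible toolchain, not a requirement; the config
# keys and dataset handling are placeholders the web app would generate.

import json

from peft import LoraConfig, get_peft_model
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)


def run_finetune_job(config_path: str, train_dataset, eval_dataset):
    with open(config_path) as f:
        config = json.load(f)

    model = WhisperForConditionalGeneration.from_pretrained(
        config.get("base_model", "openai/whisper-small")
    )
    if config.get("use_lora", False):
        # Optional LoRA adapters; rank and target modules come from the config.
        model = get_peft_model(
            model,
            LoraConfig(r=config.get("lora_rank", 8), target_modules=["q_proj", "v_proj"]),
        )

    args = Seq2SeqTrainingArguments(
        output_dir=config.get("output_dir", "./finetuned-whisper"),
        learning_rate=config.get("learning_rate", 1e-5),
        num_train_epochs=config.get("epochs", 3),
        per_device_train_batch_size=config.get("batch_size", 8),
        logging_steps=25,
    )
    trainer = Seq2SeqTrainer(
        model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset
    )
    trainer.train()
    trainer.save_model(args.output_dir)
```

The web app would generate many such configs across the user-selected hyperparameter space and queue one job per config, collecting the logs and metrics for display.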
While not a strict requirement, it is desirable that the end product be merged with the data routing application/capabilities from a previous Senior Design team. Options include that the current team could use the existing web app as a base to start from, or that the current team could incorporate the relevant portions of the previous team’s application into a newly constructed application.
The LAS will provide the team with one or more data set(s) with which to use for development and testing. The LAS will also provide the team with experienced mentors to assist in understanding the various AI aspects of this project, with particular regards to the fine-tuning methodologies to be implemented. However, this is a complex topic so at least half the team should have strong interest in the topic of machine learning/artificial intelligence.
NOTE: Commercial applications for the purpose described above do already exist in some form on the market. If the team decides to take inspiration (or even portions of actual software) from such applications that is fine with the LAS…so long as the constraints below are not violated, nor of course any legal restrictions.
The team will have great freedom to explore, investigate, and design the fine-tuning system described above. However, the methodology employed should not carry any licensing restrictions (e.g. no enterprise licenses required). In general, we will need this technology to operate on commodity hardware and software environments, and only make use of technologies with permissive licenses (MIT, Apache 2.0, etc). Beyond these constraints, technology choices will generally be considered design decisions left to the student team. The LAS will provide the student team with access to AWS resources for development, testing, and experimentation, including GPU availability for model training.
ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students.
The Laboratory for Analytic Sciences is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information.
Artificial Intelligence (AI) is the technology of our time. It consists of mathematical/computational models which are capable of performing complex tasks such as reasoning, comprehension, decision-making, and content generation, tasks that are typically considered human. The coming years will see AI models integrated into more and more products from our everyday lives, seemingly without exception, until everything from coffee mugs to dog collars is “smart”. Large language models (LLMs) are a critical AI technology: they store an immense amount of useful information and simultaneously enable human <-> AI interaction in a natural manner. LLMs allow users to input “prompts” in a natural form (most commonly text, but also images/video and audio) and receive responses in kind. Much like with human communication, however, the words you use to deliver your message carry as much importance as the message itself (see Reference #4 for general prompting guidance).
Prompt engineering can be defined as the process of crafting an LLM instruction that effectively guides the LLM to generating a desired response. Typically, prompt engineering is an iterative process. A user may start with a good first guess, such as “Draft an email to my boss asking for a raise.” After observing the LLM response, the user will notice something undesirable (e.g. the email doesn’t have any real examples justifying the request for a raise). The user will modify the prompt and run it again, attempting to get a result that achieves the objective, such as “Draft an email to my boss asking for a raise. Include the following examples that showcase my exemplary performance: 1) I detected and corrected 87 software bugs in our company’s flagship product last year 2) I led a team of engineers to create a new search feature 3) My innovations won nationally recognized awards.” The prompt engineer will repeat this process of identifying issues with the current LLM response and adjusting the prompt accordingly over and over, perhaps many dozens of times, until the final LLM response is as desired.
More recently, “AI Agents” have come onto the scene and promise to deliver AI capabilities with far less onus on the user. A definition (from Google) of an AI Agent is a software system that uses AI to pursue goals and complete tasks on behalf of users. Can an AI Agent be used to perform, or assist with, the task of prompt engineering? Yes indeed. There are multiple published solutions on this topic across a range of use cases and complexity. For one example, consider Google’s OPRO (Optimization by Prompting, Reference #1), which uses LLMs to evaluate responses and generate modified prompts in a fairly autonomous fashion based on a user-supplied “meta-prompt” (primarily consisting of a description of the task/goal, and a set of examples and evaluation criteria). Other examples include EvoPrompt (Reference #2), OpenAI’s meta prompts (Reference #3), and Prompt Tuner.
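For illustration only, the sketch below outlines an OPRO-style evaluate-and-propose loop under stated assumptions: it uses the OpenAI Node SDK, an assumed model name, and a naive exact-match scorer. It is not the published OPRO implementation, just a minimal outline of the cycle described above.

```ts
// opro-sketch.ts -- illustrative OPRO-style prompt optimization loop (not the published implementation)
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

interface Example { input: string; answer: string; }

async function ask(prompt: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model name
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0]?.message?.content ?? "";
}

// Score a candidate instruction by how many labeled examples it answers correctly.
async function score(instruction: string, examples: Example[]): Promise<number> {
  let correct = 0;
  for (const ex of examples) {
    const reply = await ask(`${instruction}\n\nQuestion: ${ex.input}\nAnswer:`);
    if (reply.trim().includes(ex.answer)) correct++;
  }
  return correct / examples.length;
}

export async function optimize(examples: Example[], rounds = 5): Promise<string> {
  let best = { text: "Answer the question.", score: 0 };
  const history: string[] = [];

  for (let i = 0; i < rounds; i++) {
    // Meta-prompt: show previously scored instructions and ask the optimizer LLM for a better one.
    const metaPrompt =
      `You are optimizing an instruction for a question-answering task.\n` +
      `Previously tried instructions and their accuracies:\n${history.join("\n") || "(none yet)"}\n` +
      `Propose one new instruction likely to score higher. Reply with the instruction only.`;
    const candidate = (await ask(metaPrompt)).trim();
    const s = await score(candidate, examples);
    history.push(`"${candidate}" -> accuracy ${s.toFixed(2)}`);
    if (s > best.score) best = { text: candidate, score: s };
  }
  return best.text;
}
```

In the prototype envisioned here, each agent-generated candidate would additionally pass through the user-validation step described later before it is ever executed.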
An important consideration of prompt engineering is whether the end goal is somewhat open-ended (such as the email example above) or if there is a definitive, correct response to be found (e.g. What is the probability that the sum of two rolled dice is 2?). In the latter case, if the user has examples, with correct answers, that require the same reasoning as the query (or queries) in question, then prompt optimization methods (such as OPRO) have been shown to be very effective. In other cases where no examples are available (often referred to as zero-shot), a user may still benefit from prompt engineering, though it may require the user to provide feedback manually. In any case, an application incorporating modern prompting assistance is of value to our users.
Some prompt optimization strategies rely on automated generation, and execution, of prompts. This is problematic from a policy and compliance perspective, where a user may be restricted from performing certain classes of LLM queries (e.g., prompts requesting or including sensitive information, prompts promoting harmful content, prompts exploiting vulnerabilities, or prompts with intellectual property risks). For this reason, one additional feature is important to the LAS: all LLM queries performed must be validated by the user before execution. For a prompt engineering app to be useful to the LAS, this feature must be incorporated.
The LAS would like a Senior Design team to develop a Prompt Engineering prototype that incorporates modern prompting assistance methods into a user-friendly web application, preferably one which could be run (or easily modified to run) client-side, perhaps via a browser extension. The overarching purpose of the project is to enable users who are perhaps only mildly familiar with LLMs, and technology in general, to craft highly effective prompts. Regarding the users’ goals, the prototype should assist users regardless of whether their goal is open-ended or well-defined, and whether their prompt will need to be zero-shot, few-shot, etc. An AI Agent system, such as OPRO or similar, will need to be incorporated/developed to support this. The prototype will need to enable the user to upload a dataset file (json/csv/tsv) of examples when available. Depending on the approach taken, it may also be necessary to allow the user to define an evaluation method for responses, though this can be discussed and decided during the semester (TBD). The prototype interface should display responses, allowing the user to inspect them and to provide feedback to the prompt engineering engine to enable further adjustments. Prior to any prompt execution, including those that may be part of an internal automated prompt exploration strategy, the user must be able to validate that the prompt does not violate any organizational policies.
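As a minimal sketch of that validation requirement (the function names and approval UI are assumptions, not a specification), every prompt, including agent-generated ones, could pass through a gate such as:

```ts
// validation-gate-sketch.ts -- every prompt, including agent-generated ones, is shown to the user first
type Verdict = { approved: boolean; reason?: string };

// Placeholder: in the prototype this would render the full prompt text in the UI
// (e.g., a browser-extension popup) and await an explicit Approve/Reject click.
async function requestUserApproval(prompt: string): Promise<Verdict> {
  return { approved: true }; // assumed stand-in for the real dialog
}

export async function executeWithValidation(
  prompt: string,
  run: (p: string) => Promise<string>,
): Promise<string> {
  const verdict = await requestUserApproval(prompt);
  if (!verdict.approved) {
    throw new Error(`Prompt rejected by user: ${verdict.reason ?? "no reason given"}`);
  }
  return run(prompt); // only executed after explicit approval
}
```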
The LAS will provide the team with one or more data sets to use for development and testing, an AWS infrastructure environment in which to work, and access tokens for utilizing modern LLMs from companies such as OpenAI.
Note that commercial applications for the purpose described above do already exist on the market. If the team decides to take inspiration (or even portions of actual software) from such applications, that is fine with the LAS, so long as the constraints below are not violated, nor, of course, any legal restrictions.
The team will have great freedom to explore, investigate, and design the prompt engineering system described above. However, the methodology and technology (or technologies) employed should not carry any restrictions. We can only use technologies that have permissive licenses (e.g., MIT, Apache 2.0, etc.). Also, we will need this technology to operate on commodity hardware and software environments. Beyond these constraints, technology choices will generally be considered design decisions left to the student team.
ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students.
The Division of Parks and Recreation (DPR) administers a diverse system of state parks, natural areas, trails, lakes, natural and scenic rivers, and recreation areas. The Division also supports and assists other recreation providers by administering grant programs for park and trail projects, and by offering technical advice for park and trail planning and development. The North Carolina Division of Parks and Recreation exists to inspire all its citizens and visitors through conservation, recreation, and education.
Our existing web applications comprehensively support divisional operations across multiple areas: personnel management, financial transactions, field staff coordination, facilities and equipment tracking, strategic planning, incident response, and natural resource management. Division managers rely on data from these systems for critical reporting and analytical functions.
The Applications team has sponsored numerous SDC projects, giving them extensive experience with the development process and proven expertise in supporting student teams while providing valuable real-world software development experience. With eleven NCSU CSC alumni on staff who have all completed SDC projects themselves, the team brings both industry perspective and firsthand knowledge of the program. The Applications team will oversee the project and work directly with the SDC to ensure project requirements are met and the development process runs smoothly.
The Division of Parks and Recreation (DPR) has a passport program that lets visitors track their adventures in state parks and collect stamps at each location. The passport features a page for every state park, along with pages for four state recreation areas, three state natural areas, and nine state trails. Each page includes a photo of a signature landmark, activities available at the site, contact information, and more.
This initiative encourages park-goers to explore each park and collect every stamp. While physical passports offer an engaging experience for visitors, they present challenges: lost passports mean lost progress, and new passport versions require starting over. A digital passport solves these problems by storing visitors’ progress online, allowing easy access and updates without the risk of losing physical copies or progress when new versions are released. In the Spring 2025 semester, two SDC teams developed an initial version of our Digital Passport application, transitioning from the traditional physical passport to a modern Progressive Web Application (PWA) based system.
While these applications provide certain core functionality, a review of operational needs has identified several areas for enhancement. The application would benefit from expanded administrative capabilities, improved analytics, and additional public-facing features. This project will build upon the existing codebase to create a more comprehensive and feature-rich application for both park visitors and administrators.
Students will work with the existing Next.JS codebase to implement certain enhancements we have defined as Phase 1, while maintaining the application’s existing functionality. Additionally, we have defined a set of Phase 2 features that we would like to see implemented but that are less critical to getting the application ready for public use; students are free to choose the order in which they implement these features, starting with the ones they find most interesting.
Offline Accessibility: Being a Progressive Web Application (PWA), the app should be usable offline, enabling users to view their passport information and monitor their progress without requiring a continuous internet connection (a minimal caching sketch is shown after this feature list).
Admin Portal Enhancements: The current state of the application provides a few core admin features but requires some refining and further development. These include CRUD operations on all park info, trails, bucket list items, and users (including viewing user progress and merging two users’ progress into one account). Furthermore, we would like to see some visual analytics (charts, graphs, etc.) for tracked activities such as park visits, stamp collection, and access history.
Most park staff will visit the application via a desktop. The current application is designed for mobile use, but we would like to see breakpoints added so that administrators visiting from a desktop can access additional admin features, such as editing all park information in table form.
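Regarding the offline accessibility item above, a cache-first service worker is one common way a PWA meets that need. The sketch below is illustrative only: the cache name and pre-cached routes are assumptions, and in practice the team might generate the worker with a build plugin rather than hand-writing it.

```ts
// sw.ts -- minimal cache-first service worker sketch (routes and cache name are assumptions)
const CACHE = "passport-v1";
const PRECACHE = ["/", "/passport", "/offline"]; // assumed app routes

self.addEventListener("install", (event: any) => {
  // Pre-cache the app shell so core pages load with no connection.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(PRECACHE)));
});

self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    caches.match(event.request).then(
      (cached) =>
        cached ??
        fetch(event.request).then((res) => {
          // Cache successful GET responses so previously viewed pages work offline later.
          if (event.request.method === "GET" && res.ok) {
            const copy = res.clone();
            caches.open(CACHE).then((cache) => cache.put(event.request, copy));
          }
          return res;
        }),
    ),
  );
});
```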
The Digital Passport App offers students a unique opportunity to contribute to a meaningful project that promotes exploration and appreciation of NC State Parks. It also helps reduce costs associated with traditional printing and materials, while allowing students to choose development avenues they find most interesting or fun to implement.
Tools and technologies used are limited to those approved by the NC Department of Information Technology (NC DIT). Student projects must follow state IT policies dictated by NC DIT once deployed by DPR. Additionally, students cannot use technologies under the Affero General Public License (AGPL).
In order for this application to fit well into our current application stack, this project requires the use of Next.JS for all code. The project must also meet Title II accessibility standards, based on WCAG 2.1 AA. Additionally, we will provide a document with certain brand guidelines that must be implemented. The current application was not designed to these standards, so some updates to the existing code base may be required to meet them.
Besides these constraints, we do not have requirements on what additional technologies and libraries students can use.
OpenDI's mission is to empower you to make informed choices in a world that is increasingly volatile, uncertain, complex, and ambiguous. OpenDI.org is an integrated ecosystem that creates standards for Decision Intelligence. We curate a source of truth for how Decision Intelligence software systems interact, thereby allowing small and large participants alike to provide parts of an overall solution. By uniting decision makers, architects, asset managers, simulation managers, administrators, engineers, and researchers around a common framework, connecting technology to actions that lead to outcomes, we are paving the way for diverse contributors to solve local and global challenges, and to lower barriers to entry for all Decision Intelligence stakeholders.
OpenDI’s open-source initiative is producing the industry-standard architecture for Decision Intelligence tool interoperability, as well as a number of example implementations of OpenDI-compliant tools and associated assets.
Decision Intelligence is a human-first approach to deploying technology for enhancing decision making. Anchoring the approach is the Causal Decision Model (CDM), comprising actions, outcomes, intermediates, and externals as well as causal links among them. CDMs are modular and extensible, can be visualized, and can be simulated to provide computational support for human decision makers. The OpenDI reference architecture provides a specification of CDM representation in JSON as well as defines an API for exchanging CDMs. However, there is no existing tool that allows curation, provenance, and sharing of these extensible CDMs. This project will provide OpenDI’s Model Hub, similar to Docker’s Docker Hub, to allow public browsing, searching, and sharing of CDMs.
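For orientation only, a CDM might be represented along the following lines. The field names below are illustrative assumptions, not the normative OpenDI JSON schema, which is defined in the reference architecture.

```ts
// cdm-sketch.ts -- illustrative (non-normative) shape for a Causal Decision Model;
// the authoritative JSON schema is defined by the OpenDI reference architecture.
type ElementKind = "action" | "intermediate" | "outcome" | "external";

interface CdmElement {
  id: string;
  kind: ElementKind;
  label: string;
}

interface CausalLink {
  from: string; // id of the upstream element
  to: string;   // id of the downstream element
}

interface CausalDecisionModel {
  id: string;
  name: string;
  elements: CdmElement[];
  links: CausalLink[];
}

// Example: "run a promotion" (action) -> "site traffic" (intermediate) -> "revenue" (outcome)
const example: CausalDecisionModel = {
  id: "cdm-001",
  name: "Promotion decision",
  elements: [
    { id: "a1", kind: "action", label: "Run a promotion" },
    { id: "i1", kind: "intermediate", label: "Site traffic" },
    { id: "o1", kind: "outcome", label: "Revenue" },
  ],
  links: [
    { from: "a1", to: "i1" },
    { from: "i1", to: "o1" },
  ],
};
```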
The best way to think about OpenDI’s Model Hub is by looking at Docker Hub.
Users should be able to:
In spring 2025, an NC State Senior Design team created a proof-of-concept implementation providing some of these features. The existing model hub has the basic functionality for requirements 2, 3, 4, and 6. Students this semester will emphasize a full account integration (requirement 1), model ownership and sharing (requirement 5), and tool creation integration (requirement 7).
This project will require the team to contribute directly to the OpenDI open source assets. OpenDI assets are developed publicly on GitHub, and the result (or process) of this project will be hosted there as well. This means team members will be expected to follow OpenDI community contribution standards and to contribute their work under the license OpenDI selects. Team members are encouraged to use their own GitHub accounts to get credit for their contributions.
The existing Model Hub has a backend written in Go and frontend in React. Students will be extending these implementations, so familiarity with Go and React is encouraged. Development of a CLI tool can be done in a language of their choosing.
Through a collaboration between Poole College of Management and NC State University Libraries, we aim to create an innovative open educational resource. Jenn, a recipient of the Alt-Textbook Grant, has already developed open source textbooks for several entrepreneurship courses on campus. Our upcoming focus now involves creating open source micro-simulations as the next phase of this initiative.
Currently, there is a significant lack of freely accessible simulations that effectively boost student engagement and enrich learning outcomes within educational settings. Many existing simulations are typically bundled with expensive textbooks or necessitate additional purchases. An absence of interactive simulations in an Entrepreneurship course diminishes student engagement, limits practical skill development, and provides a more passive learning experience focused on theory rather than real-world application. This can reduce motivation and readiness for entrepreneurial challenges post-graduation.
Our primary goal is to develop an open source simulation platform that initially supports the MIE 310 Introduction to Entrepreneurship course, but could be later made accessible to all faculty members at NC State and eventually across diverse educational institutions.
The envisioned software is a versatile open-source tool designed to create visual-novel-like mini-simulations with content and questions related to a particular course objective. The intent is to empower educators to create their own simulations on a variety of different topics. Faculty will be able to develop interactive learning modules tailored to their teaching needs. This tool needs to be able to export grades, data, and other relevant information based on the following requirements:
The previous team assigned to the current project developed a full-stack JS web application using the NextJS framework (15.2), coded in TypeScript for type consistency throughout the application. Our simulations are saved, loaded, and edited as JSON, so we chose MongoDB (8.0) as our database, which runs in a Docker container. In addition, we used Mongoose to aid data transfer between our application and MongoDB, as well as tRPC for building our API endpoints. Lastly, our authentication uses Google authentication through NextAuth.js, and we run this entire application on a virtual machine/server running Ubuntu 22.04 LTS.
For the front end, we chose React with TailwindCSS for styling. All technologies and resources used to develop this project must be open-source, as this project is open-source.
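As a rough sketch of how that stack fits together (the collection name, fields, and procedures below are assumptions, not the existing codebase), a simulation document and its tRPC endpoints might look like:

```ts
// simulation-api-sketch.ts -- storing simulations as JSON documents and exposing them via tRPC
import mongoose from "mongoose";
import { initTRPC } from "@trpc/server";
import { z } from "zod";

// Simulations are saved/loaded as JSON, so the scene content stays loosely typed.
const SimulationSchema = new mongoose.Schema({
  title: { type: String, required: true },
  ownerEmail: { type: String, required: true },
  content: { type: mongoose.Schema.Types.Mixed, required: true }, // the visual-novel JSON
});
const Simulation =
  mongoose.models.Simulation ?? mongoose.model("Simulation", SimulationSchema);

const t = initTRPC.create();

export const appRouter = t.router({
  getSimulation: t.procedure
    .input(z.object({ id: z.string() }))
    .query(({ input }) => Simulation.findById(input.id)),
  saveSimulation: t.procedure
    .input(z.object({ title: z.string(), ownerEmail: z.string().email(), content: z.any() }))
    .mutation(({ input }) => Simulation.create(input)),
});

export type AppRouter = typeof appRouter;
```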
The Christmas Tree Genetic Program (CTG) at NC State’s Whitehill Lab is working on genomic tools to develop elite Fraser fir trees. Graduate students are working on elucidating mechanisms involved in the tree’s ability to handle disease pressure, pest problems, and challenges brought about by climate change. Understanding these mechanisms allows the researchers to develop Christmas trees that are more resilient to biotic and abiotic stressors.
Scientists in the CTG program handle a large amount of plant material, such as unique individual trees, cones, seeds, embryos, cultures, and clones. Currently, all the data is managed using Microsoft Excel, which will quickly become obsolete in the face of a growing amount of plant material information needing to be stored. Plant material tracking is key for data integrity. We need to know what is what, where, and when at any point in time. A database will help manage our inventory and prevent data loss and mismanagement. Such a database is referred to as a Laboratory Inventory Management System, or LIMS.
This is the fourth round of development for the ROOTS database, which started as a CSC Senior Design in Spring 2023.
ROOTS is a repository of data related to CTG’s research activities both in the fields and in the laboratory.
The various steps of the protocols used by the research group are represented in the database. Individual plant materials of various stages are saved in the database (trees, cones, seeds, embryos…) along with metadata (origin, transfer date, quantity, location…).
The first round of development, by the Senior Design team in Spring 2023, resulted in a strong emphasis on lineage tracking and nomenclature. The ROOTS DB ensures that the seeds from a tree are connected to the parents and the progeny (“children”). The naming nomenclature contains specific information related to the tree breeding work done by the CTG. The system has three types of users: user, superuser, and admin. The user has viewing privileges only. The superuser can add, modify, and discard data in the system, and generate reports of material data based on species, genealogy, and other criteria. The admin has additional permission to add new users, superusers, and admins to the system.
The fourth round of development for ROOTS 4.0 will focus on addressing feedback from the users after testing of ROOTS 3.0. The Christmas Tree Genetics program has two main outstanding requirements:
It will also focus on other features needed in ROOTS such as:
Update suggestions for this team include:
ROOTS is a web application using the following stack:
We are a research lab in Parks, Recreation & Tourism Management focusing on environmental education. One of our current projects is partnering with middle school teachers across the state to develop and deliver a resilience education program titled Ready, Set, Resilience. The curriculum itself is co-designed, and we are now in the process of developing a teacher designed evaluation toolkit to measure impact.
This summer, we have been working with teachers to identify outcomes of Ready, Set, Resilience they see for themselves, their students, and their wider communities. Teachers have designed assessments for some of these outcomes, and our research lab is working on editing and streamlining these measures (e.g., surveys, rubrics, student assignments). We have ideas of how we might digitize these assessments and compile resulting data in a way that is useful for teachers, but need help.
This project is part of a larger community engaged project in partnership with middle school teachers across the state. We have been collaborating on a program titled Ready, Set, Resilience, and this summer, teachers began writing evaluation assessments to measure their learning goals for students. We would love for students to partner with us to design streamlined ways for teachers to collect assessment data for their students and view it easily and dynamically. The types of assessment data are emergent, but include multiple choice questions, gradebook-like data entry of teacher observations, and student work uploads. Ideally, we would use technologies accessible for teachers (e.g., Google Forms, web-based dashboards) and streamline presentations on the user end.
The initial two types of applications we have envisioned are:
We imagine this project will be iterative and collaborative with our team (researchers) and teachers. These measures are very much in development, and we are hoping for students who are interested in co-creating ideas with us so we can learn what is possible. We are working on getting a small group of teachers to join the meetings when they can to provide end-user feedback and design guidelines.
We are flexible in this regard, but prioritize an accessible front-end format for teachers, ideally using software or interfaces that they are familiar with, such as the Google Suite. On the back-end, we are flexible with the type of technologies. The only other constraints would be maintaining anonymized data as we are working with minors, but the research team feels confident that we can ensure confidentiality required by our IRB. In terms of IP, we would like to maintain the ability to continue to iterate on this design, but are happy to credit students involved.
LexisNexis Legal & Professional, an information and analytics company, states its mission as: to advance the rule of law around the world. This involves combining information, analytics, and technology to empower customers to achieve better outcomes, make more informed decisions, and gain crucial insights. They strive to foster a more just world through their people, technology, and expertise for both their customers and the broader communities they serve.
In its continuing mission to support the Rule of Law, LexisNexis has around 3,000 people working on over 200 projects per year developing software for its products.
In a rapidly changing environment, as opportunities arise and priorities shift, the question is often asked (and rarely answered with confidence) “What is the consequence of moving someone from one project to another?”
LexisNexis needs an intuitive tool to manage and track the association of people with projects, to provide the insight and data necessary to support business priority decisions.
LexisNexis is looking for a simple, intuitive application that will allow the management of resource allocation to projects.
The tool will be used by Software Development Leaders to group their people into teams and to associate them with projects for a period of time.
The data collected will support decision making when considering the resourcing of projects when competing priorities need to be considered.
It will also support financial tracking and planning.
The preferred solution would be an application accessible through Microsoft Teams, LexisNexis’ collaboration tool of choice.
LexisNexis is best placed to support development in C#, .Net, Angular and SQL Server, although the team may consider other technologies if appropriate.
The initial source of Project data will be an Excel spreadsheet.
Organizational data is sourced through Active Directory, available through Microsoft Graph API.
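For illustration, organizational data can be pulled from Microsoft Graph with a single authenticated request. The sketch below is in TypeScript purely to show the call shape (a production solution built on the preferred C#/.NET stack would more likely use the Graph SDK), and the selected fields, page size, and token handling are assumptions.

```ts
// graph-users-sketch.ts -- fetching organizational data from Microsoft Graph (token acquisition omitted)
async function listUsers(accessToken: string) {
  // Assumes an app registration with appropriate Graph permissions and a valid OAuth token.
  const res = await fetch(
    "https://graph.microsoft.com/v1.0/users?$select=displayName,mail,department&$top=50",
    { headers: { Authorization: `Bearer ${accessToken}` } },
  );
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  const body = await res.json();
  return body.value as Array<{ displayName: string; mail: string; department?: string }>;
}
```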
The North Carolina Department of Health and Human Services (DHHS), in collaboration with our partners, protects the health and safety of all North Carolinians and provides essential human services. The DHHS Information Technology (IT) Division provides enterprise information technology leadership and solutions to the department and their partners so that they can leverage technology, resulting ultimately in delivery of consistent, cost effective, reliable, accessible, and secure services. DHHS IT Division works with business divisions to help ensure the availability and integrity of automated information systems to meet their business goals.
The Information Technology Division (ITD) Vendor Finance Section is responsible for managing a wide range of Contract-Vendor Management, Financial Operations, and other IT-related tasks and processes. Currently, the division faces several challenges in efficiently creating, managing, and tracking forms and workflows generally. The existing system, which includes some development work using the Smartsheet application, lacks the comprehensive functionality needed to streamline these processes effectively.
Inefficient Form Creation and Completion: The current system does not provide an intuitive and efficient way for IT staff to create and complete forms, leading to delays and errors.
Motivation for the Project: The motivation for this project is to address these challenges by developing a comprehensive VFS Management and Tracking System. This new system will:
The IT Finance & Operations Request Management System project aims to develop a comprehensive system for the IT Division of the NC Department of Health and Human Services (NC DHHS). This system will facilitate the creation and completion of forms, establish a workflow system with notifications and tracking, and provide robust reporting capabilities.
Objectives:
Although some development work was previously performed by the Agency using the Smartsheet application, students will not be limited to that framework as the basis of a solution. Students are encouraged to think of innovative solutions to create, manage, and track requests and work within the ITD, and to design accordingly. Tools and technologies used will be limited to those which can be supported internally and approved by DHHS IT for deployment upon project completion. Student projects should follow State IT Guidelines.
Don created ResumeFab to help people create resumes tailored to each specific job listing.
He is a serial entrepreneur who has grown companies from 0 to hundreds of employees multiple times. He has looked at thousands of resumes and has hired hundreds of people. He has experienced the frustration of having to translate what is on a resume to see why the applicant thinks they are a fit for the job.
Many resumes are created to present general skills and experiences. This requires job seekers to summarize their education, work history, and accomplishments without knowing which ones might be important to an employer. ResumeFab will retain a specific database of education, projects, employment with titles, specific roles, completed tasks, and even hobbies, which AI uses to create a new resume for each specific job listing that highlights the experience and skills requested in the listing.
Create a web-based UI for desktop that allows job seekers to enter personal details, education, job experience, skills, and hobbies into a database, with inputs constrained (drop-downs would be one example of constraints) enough to make them fairly consistent between seekers. Once a seeker enters all their information into the database, a separate screen should allow them to paste a job description into a text box, and an AI RAG engine will create a resume, in 5 different formats the seeker can choose from, to “fit” that job.
I’d separately like to create a phone-based app that uses speech (AI bot asks questions by voice and entries are created by seekers’ voice responses) to collect the information listed above. The app UI will need a way to copy and paste job postings to create a resume PDF that can get stored on the phone to be sent to the employer.
The resume that is created will still need to follow normal chronological conventions, like listing the most recent employment first and only using experiences and skills that are in the database. It will need to use some level of judgment to decide which kinds of experiences apply to which skills listed in the job description and focus on presenting those with some detail. That will mean the database will need to contain some specifics for each job/project/role/achievement that let the AI present them.
AI with RAG APIs
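As a hedged sketch of the retrieve-then-generate flow described above (the entry shape, retrieval heuristic, and model name are all assumptions), the core loop might look like this; a real build would likely use embeddings and a vector index rather than keyword matching.

```ts
// resume-rag-sketch.ts -- illustrative retrieve-then-generate flow; names and model are assumptions
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set

interface ExperienceEntry { role: string; details: string; skills: string[]; }

// Naive retrieval: keep entries whose skills appear in the job description text.
function retrieveRelevant(entries: ExperienceEntry[], jobDescription: string): ExperienceEntry[] {
  const text = jobDescription.toLowerCase();
  return entries.filter((e) => e.skills.some((s) => text.includes(s.toLowerCase())));
}

export async function draftResume(entries: ExperienceEntry[], jobDescription: string) {
  const relevant = retrieveRelevant(entries, jobDescription);
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model
    messages: [
      {
        role: "system",
        content:
          "Write a chronological resume. Use ONLY the experience entries provided; do not invent anything.",
      },
      {
        role: "user",
        content: `Job description:\n${jobDescription}\n\nExperience entries:\n${JSON.stringify(relevant, null, 2)}`,
      },
    ],
  });
  return res.choices[0]?.message?.content ?? "";
}
```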
Waste Reduction and Recycling provides waste management, equipment and outreach services to the NC State University campus. The University Sustainability Office serves as a centralized resource of sustainability information and action for students, faculty and staff. Both departments work toward the university’s sustainability goals, which include diverting 70% of all campus waste from the landfill via recycling, composting or reuse. Information on the amount of campus landfill waste, recycling, composting and reuse is reported annually through the NC State Sustainability Report, to NC Department of Environmental Quality, and to NC State Environmental Health and Safety. Though annual tallying of numbers satisfies compliance requirements, that frequency is not often enough to inform data-driven decision making that could improve waste management practices. Increasing the internal reporting frequency to monthly would be ideal.
The university seeks a capstone team to develop an automated system to improve its waste, recycling, and composting data tracking. Currently, manual entry of scanned tipping tickets limits our ability to make timely, data-driven decisions. We're looking for a team to develop a tool that automatically extracts weight data from printed ticket receipts, stores it in a spreadsheet, and visualizes performance on a public Power BI dashboard. This will provide valuable insights into the university's waste diversion performance on a more frequent basis, enabling the campus to make additional progress toward its 70% waste diversion rate.
Increasing the internal reporting frequency to monthly would be ideal but is currently not done because of the large quantity of waste, varying final disposal destinations for waste streams, and variance in how waste totals are reported by each receiving facility. There are five different places where campus waste could end up: 1) local recycling facility, 2) county landfill, 3) yard waste recycling facility, 4) construction and demolition waste debris facility, 5) campus compost facility. In most cases, NC State refuse trucks collect and transport material to these destinations – often multiple times per week. Most locations provide paper tickets documenting weight upon delivery of materials, while others send monthly invoices. The best practice – both for data frequency and to potentially catch errors – would be to record the daily ticket weights and then ensure those weights match the monthly invoice. We are seeking a team that can help us create a user-friendly way for drivers to record these weights so that data is captured in real-time instead of on an annual basis. If there’s time, developing a dashboard to visualize the data would be helpful.
The goal is to develop an automated system for extracting waste and recycling weight data from print tickets, storing it in a structured format, and visualizing it via an online dashboard, providing insight to operators and the campus community about campus waste. This begins with a process that records the weights from the paper tickets, which are provided after each waste delivery to various vendors. It would be ideal for this process to be user-friendly and something that could be quickly completed by the delivery driver or processed by administrative staff once the driver returns to the office. We’d like that data automatically stored to a Google Sheet, which could be used to create a Power BI dashboard.
We are unable to utilize a webapp due to lack of personnel to maintain the application. Campus is a Google environment, and Google Sheets is the tool of choice for the personnel who complete the annual waste reporting. In lieu of an external database that we may not have the ability to maintain, we request that the solution utilize Google Sheets for the storage of data obtained from the tickets. We are open to the process that the drivers use to record the data (e.g., taking a photo of the ticket, scanning to PDF when they get back to the office, entering via an online form, etc.). The Facilities Division has Asset Works and access to an Asset Mobile app that’s customizable if that is helpful. Regarding visualization of data, we already have processes within Facilities to connect Google Sheets to Power BI dashboards.
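For illustration, once a ticket’s weight has been captured by whatever intake method is chosen, appending it to the designated Google Sheet is a single API call. The sketch below uses the official googleapis client; the spreadsheet ID, sheet name, and column order are assumptions, and authentication would need to be configured for whatever service account the team sets up.

```ts
// ticket-to-sheet-sketch.ts -- appending one tipping-ticket record to a Google Sheet
import { google } from "googleapis";

export async function appendTicket(date: string, facility: string, tons: number) {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/spreadsheets"],
  });
  const sheets = google.sheets({ version: "v4", auth });
  await sheets.spreadsheets.values.append({
    spreadsheetId: "SPREADSHEET_ID", // placeholder for the reporting sheet
    range: "Tickets!A:C",            // assumed tab and columns: date, facility, tons
    valueInputOption: "USER_ENTERED",
    requestBody: { values: [[date, facility, tons]] },
  });
}
```

The existing Facilities process could then point Power BI at the same sheet for visualization.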
There are no licensing constraints, legal issues or IP issues.
Fidelity Investments is a diversified financial services company that provides investment management, retirement planning, brokerage, and wealth management solutions to millions of individuals and institutions. A key part of delivering these services is Fidelity’s leadership in technology: leveraging advanced platforms, AI, and data analytics to create seamless, secure, and personalized client experiences. This commitment to innovation ensures Fidelity remains at the forefront of digital transformation in financial services.
Fidelity Investments lacks a centralized digital platform for employees to discover, engage with, and contribute to external industry events, conferences, and thought leadership opportunities. Currently, there is no streamlined process for employees to identify relevant engagements, submit proposals to present, or share back insights and learnings with internal teams. Creating an internal website will enable greater visibility, participation, and knowledge sharing across the organization, reinforcing Fidelity’s position as a leader in financial services and technology.
To address the lack of a centralized resource for external engagement, we envision a web-based internal portal designed specifically for Fidelity employees. This platform will allow users to:
For example, an employee in the AI research team could find a fintech conference, submit a talk proposal, and later upload a summary of key takeaways for colleagues. This solution will streamline engagement, increase visibility of Fidelity’s thought leadership, and promote cross-team learning and collaboration.
Web-based internal application.
The preferred tech stack is Angular 17+, Java 21+, PostgreSQL, and MongoDB.
Impartial is a criminal justice nonprofit. We exist to build meaningful human connections and programs that improve the criminal justice system through personal and community-driven engagement. Impartial believes that one of the ways to do that is by engaging future justice leaders in games that can help them better understand what the US justice system is, what role they could play in it, and, most importantly, what the system could be, using gaming to understand possibilities.
Impartial has built six criminal justice video games: Investigation, Grand Jury, Plea Deals, Motions to Dismiss, Jury Selection, and Prosecution. We're now developing two more simultaneously: The Defense/Sentencing and Jury Deliberation. Five games have been developed through the NCSU Capstone project, creating valuable assets we can share across games. For consistency and efficiency, we're using the same characters, names, and scenes throughout the series.
The Defense game can be built very similarly to the existing Prosecution game in that it is a combination of choices about which witnesses will testify, what they will say, and what documents will or will not be entered into evidence. The Defense game picks up where Prosecution ends - the prosecution has rested their case, and now it's the defense's turn. This creates a critical decision point that most people don't understand about criminal law: the defense doesn't have to present anything. The prosecution must prove guilt; the accused doesn't have to prove innocence.
But here's the strategic challenge that makes this game compelling: What is the jury thinking when the prosecution rests? If they already seem convinced of innocence, why risk disrupting that with testimony that could backfire? But if the defense attorney senses the jury is leaning toward conviction, what evidence or witnesses might turn the tide? The game puts players in the defense attorney's shoes, constantly viewing everything through the jury's eyes.
Every decision carries weight because credibility is everything - and anything the defense presents opens the door for prosecutorial cross-examination. Players must navigate their own witnesses, evidence, and strategy while facing the ultimate question: Should the accused testify?
The goal is helping players experience the complex strategic thinking of criminal defense work while learning how to achieve the best possible outcome for their client. Most people never see this side of the justice system, so the game reveals the questions, opportunities, and dilemmas that defense attorneys face every day.
Sentencing represents the final stage for the defendant - the moment they address the judge through their allocution. Players must decide how to approach their statement: What responsibility will they accept for the crime they have been found guilty of? Should they address victims in the courtroom? Meanwhile, the prosecution presents their sentencing recommendation based on sentences given to prior cooperating Government co-defendants, extenuating circumstances, and statutory guidelines. But what would you, as the player, want to see happen at sentencing?
The game introduces an unusual legal scenario that complicates everything. Players learn that after the jury's decision, the defendant successfully moved to overturn the verdict, achieving an acquittal. This is very rare. The Judge disagreed with the jury’s verdict.
However, the government then successfully appealed that acquittal through a higher court and the defendant is now considered re-convicted.
This means the same judge who once acquitted this person must now sentence them as convicted - a situation that creates unique psychological and legal dynamics. How do you think this affects what should happen during sentencing? As players navigate this complex courtroom environment, they must understand the judge's responsibilities while considering what extenuating factors should influence the decision.
The prosecution, defense, and defendant each approach this uncommon situation differently, knowing the focus is on determining appropriate punishment rather than relitigating the case. What factors do you believe should carry the most weight in the judge's decision? Each side advocates within statutory guidelines, but ultimately the judge decides. Players then explore potential appeals or other legal recourse if either side disagrees with the outcome. Throughout this process, the game continuously challenges you to consider: What would you consider a fair outcome in this unusual situation? The experience forces players to grapple with their own sense of justice while navigating the complexities of an extraordinary legal conundrum.
The solution to the “Defense’s” problem is to present the best information to prove their case. You have to select which information to present, how to present it, which witnesses to use or not. You have to be strategic in your questions and answers so that you are believable and that it makes sense with the facts/law in the case. You do not have to be knowledgeable about criminal law to be involved, but you will probably learn a thing or two about it during the project. Ultimately, the solution is that gamers learn what it feels like and what is needed to represent and defend an accused.
The solution to the “Sentencing” problem is to hold the guilty party accountable for their crime, but in this case the same judge who found the defendant innocent is in an untenable position of having to punish that defendant. If the judge doesn’t punish the defendant sufficiently, the Government will appeal the sentence. If the judge punishes too much, the judge’s own moral compass will be bothered by knowing that an innocent person is being punished for no reason. Does the judge punish the defendant like others that pled guilty for a similar crime? Does the judge let the defendant “go free” with the absolute least amount of punishment, say probation? Does the judge reconsider the 7-page brief that was written in response to the request for an acquittal and give a sentence closer to what the prosecution originally requested – 8.5 years of incarceration? We should hold people accountable for the crime they committed and no more.
It might be helpful for you to know about Unity and have used Ren’Py. Any other technology that you think would be helpful for the best interest of the game should be considered. We are not experts on limitations or constraints with any technology. We are rule followers.
Impartial is a criminal justice nonprofit. We exist to build meaningful human connections and programs that improve the criminal justice system through personal and community-driven engagement. Impartial believes that one of the ways to do that is by engaging future justice leaders in games that can help them better understand what the US justice system is, what role they could play in it, and, most importantly, what the system could be, using gaming to understand possibilities.
Impartial has built six criminal justice video games: Investigation, Grand Jury, Plea Deals, Motions to Dismiss, Jury Selection, and Prosecution. Five have been developed through the NCSU Capstone project. We're now creating three more simultaneously: The Defense, Jury Deliberation, and Sentencing. We're leveraging assets from prior games - using the same characters, names, and scenes for consistency and efficiency throughout the series.
The Jury Deliberation game addresses a fundamental challenge in our justice system: there are countless ways a jury can decide a defendant's fate, but how do we ensure justice is actually served?
Players enter the deliberation room with one primary motivation - to increase the probability that justice will be served. But the game also lets you explore what happens when you don't "do what's right." What are the psychological effects on you as a player? What are the far-reaching consequences of a wrong verdict?
The game confronts real deliberation dynamics: Do jurors have all the information they need to make the right decision? Are some jurors rushing to finish because they have other priorities? How do personal biases influence the process? Even the way the jury votes - anonymously, by a show of hands, etc. - plays a role in the outcome. Players experience these pressures firsthand while grappling with the weight of their decision.
The underlying principle is simple: If we're going to use a jury system to hold people accountable, shouldn't it be one you'd be willing to go through yourself as a defendant? The game challenges players to create the kind of deliberation process they would want if their own freedom was at stake.
After nine days of testimony and evidence, twelve jurors enter the deliberation room to decide a verdict. But here's where theory meets reality: twelve people with different backgrounds, biases, and opinions must sift through all the physical evidence and witness testimony to determine what's true, what's relevant, and how it matches the charges against the defendant.
The dynamics immediately become complex. Do you vote openly so everyone sees your position, or cast anonymous ballots? Does one strong personality dominate the discussion, or does everyone get heard? Are jurors aligned from the start, or are there sharp divisions that will require serious persuasion to resolve?
Then there's the standard itself: "beyond a reasonable doubt." Where exactly is the line between "reasonable doubt" and "beyond" it? How do you get twelve people to agree on something that is completely subjective? And if you can't reach consensus, what happens to the case? Should we even be using this standard, or is there a better way?
This is where everything we say that we believe about justice gets tested. If we truly believe in a jury of our peers and that you're innocent until proven guilty, this room is where those principles either work or fail. The deliberation room is where the ideals of our justice system face the messy reality of human decision-making.
It might be helpful for you to know about Unity and have used Ren’Py. Any other technology that you think would be helpful for the best interest of the game should be considered. We are not experts on limitations or constraints with any technology. We are rule followers.
Dr. Renee Harrington is an Associate Teaching Professor/Assistant Department Head in the Department of Health and Exercise Studies at North Carolina State University. Her teaching and research focus on various facets of health and wellness including nutrition, resilience, stress management, and physical activity promotion.
At NC State University, as with many other college campuses, poor eating habits are prevalent among students, contributing to short-term health issues like fatigue, stress, and lack of focus, as well as long-term risks such as obesity and chronic diseases. Traditional nutrition education methods often fail to engage students or offer practical, day-to-day strategies for healthier eating. Without the necessary tools to make informed choices, students may continue to overlook the importance of balanced nutrition, relying on unhealthy food options that are convenient but nutritionally inadequate or imbalanced. The lack of engaging, interactive resources for learning about healthy eating habits represents a significant gap in addressing the dietary challenges faced by students. To improve students’ awareness of nutrition and empower them to make healthier food choices, there is a need for a more dynamic, personalized, and accessible approach to learning.
This project aims to address these gaps in nutrition education by creating an innovative, game-based platform that teaches students about healthy eating in a fun, interactive, and engaging way. The goal is to design a game that captures students' attention and encourages them to think critically about food choices, while providing practical, real-world nutritional knowledge. The interactive nature of the game will allow students to explore different scenarios, make food-related decisions, and see the consequences of their choices in a risk-free environment. By completing challenges, solving puzzles, and making food-related decisions within the game, players will learn about the nutritional value of different foods and how to incorporate healthy choices into their daily routines. The game will incorporate real-world nutrition choices based on NC State campus dining options and be tailored to the specific needs and preferences of the NC State student body, offering a personalized experience.
The project end product will be implemented in HES 100-level courses as a supplemental learning tool to enhance the curriculum. The game will be integrated into coursework and a specific module linked to education on nutrition, wellness, and healthy lifestyle choices. Students will engage with the game as part of their course requirements, allowing them to apply what they’ve learned in class in a dynamic, interactive format.
This personalized and engaging approach will help foster healthier habits that students can carry with them beyond their time at NC State University, ultimately contributing to their overall health and well-being. The project’s ultimate motivation is to help students at NC State University—and, potentially, students at other institutions—develop a deeper understanding of nutrition and the tools to lead healthier lives through a fun, accessible, and educational experience.
Benefits to End Users:
Students working on this project do not need to have content knowledge of nutrition. There is flexibility with regard to technology selection and paradigm. Ideally, the user interface should be easy to use and accessible to a broad diversity of users, feasible within the given time, and, if possible, low-cost to maintain long-term. The solution ideally includes secure web hosting with built-in support for regular updates and security patches, as well as language about students’ privacy and data protection. It is preferred that the tool be able to export grades, data, and other relevant information based on the following requirements:
The primary goal is to develop an open-source simulation platform that initially supports nutrition curriculum in HES courses but could be later made accessible to all faculty members at NC State and eventually across diverse educational institutions.
Dr. Srougi is an associate professor (NCSU- Biotechnology Program/Dept of Molecular Biomedical Sciences) whose research interests are to enhance STEM laboratory skills training through use of innovative pedagogical strategies. Most recently, she has worked with a team to develop an interactive, immersive and accessible virtual simulation to aid in the development of student competencies in modern molecular biotechnology laboratory techniques.
Biopharmaceutical manufacturing requires specialized expertise, both to design and implement processes that are compliant with good manufacturing practice (GMP). Design and execution of these processes, therefore, requires that the current and future biopharmaceutical workforce understands the fundamentals of both molecular biology and biotechnology. While there is significant value in teaching lab techniques in a hands-on environment, the necessary lab infrastructure is not always available to students. Moreover, it is clear that while online learning works well for conceptual knowledge, there are still challenges on how to best convey traditional ‘hands-on’ skills to a virtual workforce to support current and future biotechnology requirements. The need for highly skilled employees in these areas is only increasing. Therefore, to address current and future needs, we seek to develop virtual reality minigames of key laboratory and biotechnology skills geared towards workforce training for both students and professionals.
The project team has previously created an interactive, browser-based simulation covering a key biotechnology laboratory skill set: sterile cell culture techniques. This learning tool is geared towards university students and professionals. In the proposed project, we intend to develop 3 virtual reality minigames using the Unity game engine to reinforce the fundamental skills required to perform the more advanced laboratory procedures represented in the simulation. The game interactions occur through the Meta Quest 3 VR system. This project will be Phase II of a previous senior design project, in which the foundation for one minigame (i.e., use of a pipet aid; see below) and one prototype biohaptic device was developed. The current project will focus on refining that minigame as well as developing two other minigames. These refinements include enhancing the game environment’s visuals (i.e., more dynamic lighting and realistic assets), developing serial communications to integrate and use the biohaptic device in the game, and creating the other two minigames.
Minigame content: All minigames will feature the following core laboratory competencies that would benefit exclusively from advanced interactivity and realism: 1) how to accurately use a single-channel set of pipettes, 2) how to accurately use a pipet aid (minigame that has been created), and 3) how to accurately load samples into an SDS-PAGE gel.
Length and Interactivity: Minigames should aim to be around a 10-15 min experience. The games should allow users free choice to explore and engage in the technique while providing real-time feedback to correct any errors in user behavior. They should be adaptable for future use with biohaptic feedback technology to provide a ‘real world’ digital training experience. A prototype biohaptic pipet aid has been created and is available to iterate upon and improve.
Cohesion: The set of minigames should connect to themes and design represented in the virtual browser-based simulation previously developed. Therefore, the visual design of the minigames should closely match the real-world laboratory environment.
Students working on this project do not need to have content knowledge of biotechnology or biotechnology laboratory skills. However, a basic interest in the biological sciences and/or biotechnology is preferred. This project will be a virtual reality extension of a browser-based interactive simulation written in 3JS within a GitHub repository. Development of the minigames should be done in Unity. Games should be designed to run on relatively low-end computer systems. Proper licensing permissions are required if art and/or other assets are used in game development.
Dr. Stallmann is a professor (NCSU-CSC) whose primary research interests include graph algorithms, graph drawing, and algorithm animation. His main contribution to graph algorithm animation has been to make the development of compelling animations accessible to students and researchers. See mfms.wordpress.ncsu.edu for more information about Dr. Stallmann.
Galant (Graph algorithm animation tool) is a general-purpose tool for writing animations of graph algorithms. More than 50 algorithms have been implemented using Galant, both for classroom use and for research.
The primary advantage of Galant is the ease of developing new animations using a language that resembles algorithm pseudocode and includes simple function calls to create animation effects.
There are currently two versions of Galant: (a) a sophisticated, complex Java version that requires git, Apache ant, and runtime access to a Java compiler; (b) a web-based version, galant-js, accessible at https://galant.csc.ncsu.edu/ or via the github repository galant-js (https://github.com/mfms-ncsu/galant-js). The latter was developed by a Spring 2023 Senior Design Team and enhanced by teams in Fall 2023, Spring 2024, Fall 2024, and Spring 2025. It has been used in the classroom (Discrete Math) and several algorithms have been successfully implemented. A major challenge is the implementation of search tree algorithms, which would be useful in a data structures class.
All teams working on the project are expected to produce transparent code and detailed developer documentation. It is essential that the sponsor, Dr. Stallmann, be able to continue development on his own or with the help of other students and future teams. To that end, he expects to be directly involved in the development, actively participating in coding and documentation.
Graph algorithms related to search trees have not been implemented in either the Java or the JavaScript version of Galant. These would be very useful in a data structures class. Though many animations of search trees are available on the internet, none have the flexibility available in Galant: the ability to customize the animations, add animations of unusual search tree implementations, allow users to provide trees as input, etc. Cytoscape provides a tree layout in the yFiles package. This, combined with an animation API suited for search tree operations (e.g., insertions and deletions of nodes), could facilitate implementations of search tree animations.
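As a rough sketch of what one such animation step could look like in a Cytoscape-based front end (this is not Galant's actual API; the element names and use of the built-in breadthfirst layout are assumptions), inserting a node into a displayed search tree might be:

```ts
// search-tree-sketch.ts -- illustrative animation step for inserting a node into a displayed BST
// using cytoscape.js and its built-in breadthfirst (tree-like) layout
import cytoscape from "cytoscape";

const cy = cytoscape({
  container: document.getElementById("cy") as HTMLElement,
  elements: [
    { data: { id: "50", label: "50" } },
    { data: { id: "30", label: "30" } },
    { data: { source: "50", target: "30" } },
  ],
  layout: { name: "breadthfirst", directed: true, roots: ["50"] },
});

// One animation step: add a key as a child of its parent, then re-run the tree layout with animation.
export function animateInsert(parentId: string, key: string) {
  cy.add([
    { data: { id: key, label: key } },
    { data: { source: parentId, target: key } },
  ]);
  cy.layout({ name: "breadthfirst", directed: true, roots: ["50"], animate: true }).run();
}
```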
Students are required to learn and use JavaScript effectively. The current JavaScript implementation uses React and Cytoscape for user interaction and graph drawing, respectively. An understanding of Cytoscape and its layout options is required to understand how trees can best be displayed without impacting the display options used for existing algorithms.
The Tailwind plugin is used for style sheets that determine screen positions, colors, fonts, and other features of buttons and other user interface elements.