BCombs is a Cary-based startup that provides customer relationship management (CRM) software for youth-mentoring nonprofits. With this SaaS platform, nonprofits serving disadvantaged families and youth can simplify their operations and better fulfill their mission. Features include aggregating data, automating processes, providing actionable metrics, and more, on an annual or monthly subscription basis. In Fall 2022, BCombs was one of 15 startups to receive NC IDEA’s $10,000 MICRO grant.
Nate has worked in Accounting (public & private), Finance, Global Operations, and process optimization at organizations including KPMG, Lenovo, and the Federal Reserve. In addition, he has 25+ years leading nonprofit youth-mentoring organizations.
Natalie has 20+ years of professional experience in Project Management, Human Resources & Accounting in the financial and IT industries, leading and managing complex projects.
Our team has professional and nonprofit experience in Operations, Project Management, and IT. We have developed enterprise-level software and managed global teams. Individually, we each have 20+ years of experience working for some of the largest companies in the world.
Our advisers have experience leading nonprofits, building data and analytic programs, growing sales, and have 8+ exits under their belts.
Nonprofits and other organizations that serve youth operate on passion, so administrative tasks, data collection, and other routine processes often go undone or are done poorly.
b.combs takes the administrative burden and the need for technical knowledge off nonprofits' plates, so they can focus on what matters most: their youth and communities. We do this by leveraging our professional and nonprofit experience to help them…
…all while making it affordable.
b.combs is a plug & play Social CRM platform designed to take the administrative burden off youth mentoring organizations. Specifically, BCombs automates processes, increases engagement, and measures impact to help organizations have a greater impact on the communities they serve.
In short, we help nonprofits and families save time, reduce effort, increase impact, and achieve their mission.
The project is to build a SaaS CRM system that can be used by both local and national organizations to 1) automate processes and 2) track, organize, and analyze data. We are building Salesforce’s, or Oracle’s, little, little cousin. The CRM will have multiple modules that nonprofits and other customers can utilize based on their subscription level.
For this project, the goal is to reimagine and rebuild b.combs, using what we’ve learned from our existing minimum viable product and clients. Subgoals might include:
The software should be able to support multiple user types, various types of organizations (national, regional, local), and various organization goals (mentoring, service, etc.)
Key features include:
Stretch features may include:
Technology
Hosting using AWS
Drs. Celso Castro-Bolinaga, Chadi Sayde, and Mahmoud Shehata are part of the Department of Biological and Agricultural Engineering at NC State University. Dr. Castro-Bolinaga is an Associate Professor leading the Environmental Sediment Mechanics Research Group, which conducts studies in the area of environmental hydraulics and sediment transport. Dr. Sayde is an Associate Professor leading the Precision Agriculture Research Group, which develops and employs advanced models and sensing systems to quantify water and energy movement across the soil-plant-atmosphere continuum, from individual plants to field and watershed scales. Dr. Shehata is a Postdoctoral Research Scholar currently collaborating with Dr. Castro-Bolinaga and Dr. Sayde in developing the hardware of the sensing system related to this proposal.
Scour refers to the removal of sediment around infrastructure due to the erosive action of flowing water. In rivers, scour around bridge abutments and piers (see Figure 1) remains a major technical, societal, and economic challenge in the US. Approximately 500,000 bridges in the US are built over waterways, of which more than 20,000 are currently susceptible to overtopping or having their foundations undermined by scour during extreme storm events. Scour has been responsible for nearly 60% of recorded bridge failures and resulted in an estimated average annual cost of around $30 million as of 1997. This is why the concept of “living bridges,” in which sensors are installed in new and existing structures to provide real-time feedback on structural integrity, is becoming increasingly popular.
Our research groups at BAE developed a novel scour monitoring device that dynamically monitors, at sub-centimeter resolution, changes in scour depth, water depth, water velocity, and temperature profiles from the water surface to the riverbed. The device utilizes the differential thermal responses of the sediment, water, and air media to a heating event to accurately identify the locations of the interfaces between them. The device (Figure 2) consists of high-resolution fiber-optic (FO) temperature sensing cables wrapped around a supporting structure to increase its vertical sensing resolution. A Distributed Temperature Sensing (DTS) laser instrument collects the temperature signals along the FO cable and reports them in XML files. A heating element co-located along the FO cable applies heat at a constant rate along the device. Heat application is turned on and off using a relay controller.
Figure 2: Schematic of the novel device setup and a photo of an actual installation.
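To make the detection idea concrete, here is a minimal sketch of locating the sediment-water interface from two DTS traces (before and during a heat pulse). The XML tag names and the threshold value are hypothetical stand-ins; the actual DTS export schema would need to be substituted.

```python
# Sketch only: the real DTS XML layout differs; tags here are assumptions.
import xml.etree.ElementTree as ET

def load_profile(path):
    """Parse one DTS trace into a list of (depth_m, temp_c) tuples."""
    root = ET.parse(path).getroot()
    return [(float(p.get("depth")), float(p.get("temp")))
            for p in root.iter("point")]

def interface_depth(before, during, threshold=1.5):
    """Return the first depth whose heating response exceeds `threshold` C.

    Sediment retains heat while flowing water advects it away, so the
    temperature rise during a heat pulse jumps sharply below the
    sediment-water interface.
    """
    for (depth, t0), (_, t1) in zip(before, during):
        if t1 - t0 > threshold:
            return depth
    return None

before = load_profile("trace_before_pulse.xml")   # hypothetical filenames
during = load_profile("trace_during_pulse.xml")
print("Estimated sediment-water interface:", interface_depth(before, during), "m")
```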
A previous Computer Science Senior Design team developed the first version of the software, which offers basic control of the heat pulses and the detection, extraction, and analysis of the XML temperature data to compute the locations of the sediment-water and water-air interfaces. The objective of this project is to continue developing the software and add more functionality. The new version of the software should be able to perform the following tasks:
The students have extensive freedom to conceptualize the main components of the software and to add or modify components as deemed necessary, as long as the software fulfills its main objectives.
The final software should be compatible with different Windows (and preferably iOS) operating systems. The existing codebase uses Python and Django.
The Benjamin Franklin Scholars Program at N.C. State allows students to simultaneously pursue bachelor’s degrees in both engineering and the humanities/social sciences. By combining a degree in engineering with one in the liberal arts, the program provides students with a broad perspective which better equips them to solve the complex problems of today and the future.
Franklin Scholars can combine any degree in the College of Engineering with any degree in the College of Humanities and Social Sciences (plus economics). In addition to the coursework for those two degrees, students in the program take two courses designed specifically for Franklin Scholars: STS302H (Science, Technology and Human Values) and E497 (The Franklin Scholars Capstone). Students typically complete the program in 4 to 5 years.
The primary purpose of the program is to develop future leaders and technical professionals whose range of skills and perspectives will match the kinds of interdisciplinary challenges the world increasingly faces. The program fosters a strong community among both its students and alumni that emphasizes intellectual curiosity, open-mindedness, breadth of education, and diversity of interests.
In its 32 years, the Franklin Scholars Program at N.C. State has produced nearly 250 graduates who have used their engineering and liberal arts training in a variety of settings and professions, including industry, academia, and government. Their careers run a broad gamut: engineers, lawyers, physicians, policy analysts, and entrepreneurs, to name a few.
The Franklin Scholars Program at N.C. State has recently formed an alumni association (BFSAA) whose mission is:
to strengthen and enhance the Benjamin Franklin Scholars (BFS) Program by further connecting students and alumni in order to better exchange knowledge and experience while deepening the sense of a larger, tighter community.
Its goals are to:
Specifically, key objectives of the BFSAA are to better connect alumni and students through an enhanced alumni mentoring and advising program, alumni speaker programs, periodic student/alumni events, and a robust alumni directory/database which can be accessed via a BFSAA webpage.
The problem this project will address is the development of that webpage and associated database. Both are critical to all of the above goals, as they allow for an effective means of not only communicating information but connecting students and alumni alike.
We envision a webpage that allows secure access to BFS alumni information, including not only contact and relevant personal and professional information but also searchable fields for degrees, willingness to mentor and advise, willingness to participate in BFS-sponsored events, previous attendance at such events, and donations to the program. Each alumnus would have a user ID and password with which they could update their own information and view and search selected fields of other alumni.
Current students could use the webpage and database to search for alumni whose backgrounds best fit their needs for an ongoing mentor or advisor who might be of assistance in career planning. The BFS Program Director could use the application to track alumni donations and participation in various program social events. Alumni could better connect with one another for ongoing social and professional purposes. Certain fields in the database would be open to view by all users while others would be viewable only by each individual alumnus and the Program Director. Hence, the application would require various levels of security.
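As a rough illustration of the field-level visibility just described, here is one way the access rules could be modeled. The field names and roles below are placeholders; the authoritative list lives in the attached functional-specs spreadsheet.

```python
# Illustrative sketch only: field names and roles are hypothetical.
FIELD_VISIBILITY = {
    "name":              {"alumnus", "student", "director"},
    "degrees":           {"alumnus", "student", "director"},
    "willing_to_mentor": {"alumnus", "student", "director"},
    "email":             {"alumnus", "director"},
    "donations":         {"director"},  # director-only field
}

def visible_fields(record: dict, viewer_role: str, is_owner: bool) -> dict:
    """Filter an alumni record down to what the viewer may see."""
    if is_owner:
        return dict(record)  # owners always see their own full record
    return {k: v for k, v in record.items()
            if viewer_role in FIELD_VISIBILITY.get(k, set())}
```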
The webpage would also be a means of communicating to all alumni periodic news on the BFS program as well as on the BFS Alumni Association itself. It would also be the mechanism by which the BFSAA conducts board member and officer elections every three years.
For more detail on the envisioned database and webpage capabilities, see the attached Excel spreadsheet (BFS Alumni Association DB Functional Specs V2). It contains a definition of all fields in the database as well as access levels and search capabilities for those fields. It also lays out other required functions of the webpage such as data analysis, association communications, and the elections process.
We are flexible in the technologies and applications utilized to develop the webpage and database. However, the end product should be accessible either by personal computer or mobile phone. There are no known licensing constraints or legal or IP issues.
LexisNexis® InterAction® is a flexible and uniquely designed CRM platform that drives business development, marketing, and increased client satisfaction for legal and professional services firms. InterAction provides features and functionality that dramatically improve the tracking and mapping of the firm’s key relationships – who knows whom, areas of expertise, up-to-date case work and litigation – and makes this information actionable through marketing automation, opportunity management, client meeting and activity management, matter and engagement tracking, referral management, and relationship-based business development.
Effective tools fit with the way you work.
With that in mind, LexisNexis InterAction® has a series of Microsoft Office Add-ins and integrations that allow users to access their customer data from Outlook, Excel & Word.
Rather than using Microsoft Office, however, many smaller legal firms are turning to Google Workspace to manage their emails, contacts, and calendars.
In previous projects, integrated tools have allowed users to create, add and modify Contact information in InterAction from a sidebar within the Gmail and Calendar applications.
A better option for the end user is a tool that passively captures Contact information “behind the scenes” as a background service, using the Google Workspace API to synchronize changes to contact data with InterAction. A service management feature should allow configuration of the service and reporting on its activity.
Along with the data capture service, there is the need for the user to manage the actions of this service, suppressing the collection of data from contacts on the basis of applied rules. Rules may relate to contact metadata, content keywords, an exclusion list, etc.
As an example: Bob receives an email invitation to a meeting from Jane via Gmail. The InterAction/Google Workspace (IA/GW) Connection Service spots the email and springs into action! The service checks to see whether, through its set of rules, this transaction is of interest. If it is, it checks whether Jane is an existing Contact in InterAction (using their email address) and, if not, extracts the contact information in the email (using an existing InterAction API) and submits the details as a new Contact. If Jane is known, it registers the email invitation as an Activity between Bob and Jane in InterAction (also using an existing InterAction API). Periodically (depending upon configuration), the service sends an email to Bob to inform him of the actions taken on his behalf and offering him the opportunity to follow up on individual actions or modify the rules.
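A minimal sketch of that decision flow appears below. It assumes a thin `interaction` client wrapping the existing InterAction API; the method names, message fields, and rule shapes are all illustrative, not the real API surface.

```python
# Sketch of the IA/GW Connection Service flow; `interaction` methods
# and message keys are hypothetical placeholders for the real APIs.
def of_interest(message, rules):
    """Apply suppression rules before any data is sent to InterAction."""
    domain = message["from_email"].split("@")[-1]
    if domain in rules.get("excluded_domains", set()):
        return False
    body = message.get("body", "")
    return not any(kw in body for kw in rules.get("blocked_keywords", []))

def handle_incoming(message, rules, interaction, audit_log):
    if not of_interest(message, rules):
        return
    contact = interaction.find_contact(email=message["from_email"])
    if contact is None:
        contact = interaction.create_contact(
            name=message["from_name"], email=message["from_email"])
        audit_log.append(("created_contact", contact.id))
    else:
        interaction.create_activity(
            contact_id=contact.id, kind="email", subject=message["subject"])
        audit_log.append(("logged_activity", contact.id))
    # A periodic job would email the audit_log entries to the user,
    # offering follow-up actions or rule changes.
```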
The team may choose their technology stack from any mix of JavaScript, Python, and C#.
Google Apps Script may also be required.
It is also recommended that Angular 15+ and D3 be used for any front end and visualizations.
An overview of the InterAction MS Office Add-ins will be given, together with a resource pack for styling.
Credentials and access to a test instance of InterAction will also be provided.
SAS provides technology that is used around the world to transform data into intelligence. A key component of SAS technology is providing access to good, clean, curated data. The SAS Data Management business unit is responsible for helping users create standard, repeatable methods for integrating, improving, and enriching data. This project is being sponsored by the SAS Data Management business unit and the SAS Data Ethics Practice to help users better understand their data and leverage the data fairly and ethically.
A proxy variable is a variable that is not in itself directly relevant, but is correlated (linearly or otherwise) with a variable of interest. Proxy variables can also serve in place of an unobservable or immeasurable variable. In cases where we can't measure an outcome directly, a proxy variable can be useful (such as per capita GDP as a proxy for standard of living), but not being cognizant of proxy variables within one's data can be troublesome as well. For example, in training an AI model we may consciously choose to exclude sensitive variables from our training dataset (such as age, gender, race, and so forth) to avoid introducing bias into the model. Even though these sensitive variables were excluded from training, the model may still be biased if proxy variables remain in the dataset and were used as part of the training. Inappropriate proxy variable use can lead to biased models that not only perform inaccurately on real-world data, but can also lead to irresponsible and unethical model usage that may disproportionately impact vulnerable populations.
A real-world example of the danger of proxy variables biasing a model can be found in Amazon's 2014 attempt to create an experimental AI tool for reviewing job applicants' resumes. The model was trained on resumes submitted to the company over a ten-year period (most of them coming from men). Though gender was not part of the training data, the model penalized applicants who attended all-women's colleges and resumes that included the word "women's," as in "women's chess club captain" or "women's rugby team".
As part of this project, you will develop a proxy variable exploration tool. This tool should allow for a user to upload any arbitrary dataset (in a CSV format) and perform an analysis to identify any proxy variables for variables identified as "private" or "sensitive" within the dataset and report those proxies back to the user.
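As a rough illustration of one candidate algorithm (not a prescribed design), the sketch below flags proxies for a sensitive column using Cramér's V association on a CSV. It assumes categorical columns; continuous columns would need binning first, and the filenames, column names, and threshold are placeholders.

```python
# Minimal proxy-detection sketch using Cramér's V; assumes categorical data.
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Association in [0, 1] between two categorical series."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    r, k = table.shape
    return (chi2 / (n * (min(r, k) - 1))) ** 0.5

def find_proxies(df: pd.DataFrame, sensitive: str, threshold: float = 0.5):
    """Score every other column against the sensitive one and keep strong hits."""
    scores = {col: cramers_v(df[col], df[sensitive])
              for col in df.columns if col != sensitive}
    return sorted(((c, s) for c, s in scores.items() if s >= threshold),
                  key=lambda cs: -cs[1])

df = pd.read_csv("dataset.csv")  # placeholder path
for col, score in find_proxies(df, sensitive="gender"):
    print(f"{col} may proxy for gender (V = {score:.2f})")
```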
The tool should work with any dataset in CSV format.
Some suggested datasets to use:
We would like the team to implement multiple algorithms for analysis of proxy variables.
The team may find that there are pros and cons to each algorithm and each algorithm may perform better/worse across different dimensions such as accuracy, performance, data size, etc. This is not unexpected.
The team may perform any pre-processing, data cleansing, or feature engineering deemed necessary to implement the above algorithms or to improve their performance.
The tool must allow users to select the algorithm to perform the initial analysis of a dataset and perform further analysis with a different algorithm if desired.
The tool should allow users to select a subset of columns to analyze instead of all columns. It would be particularly useful if the analysis could be restricted to so-called sensitive variables/columns such as gender, race, ethnicity, marital status, or religion.
The tool must display the results of the algorithm identifying the proxy variables (and, if relevant for the chosen analysis, the combination of variable/value, e.g., occupation=doctor), the correlated variable (i.e., the variable of interest), and any relevant score. The tool must also report the analysis time.
Any appropriate visualization accompanying the results is welcome, but not required.
The ability to compare results from one algorithm to another could be a stretch goal.
The Town of Cary, located in the heart of North Carolina, is a vibrant and rapidly growing community known for its high quality of life and strong commitment to innovation. With a population exceeding 175,000, Cary is consistently ranked among the best places to live in the United States. The town prides itself on offering top-notch services, a wide range of recreational amenities, and a thriving business environment. Cary's strategic location within the Research Triangle Park region makes it an attractive destination for technology companies and professionals, fostering a dynamic and forward-thinking atmosphere. The town's government is dedicated to maintaining its reputation for excellence through continuous improvement and investment in cutting-edge technologies.
The Town of Cary, NC, is committed to enhancing efficiency, security, and user satisfaction through technological innovation. As part of this initiative, we are partnering with North Carolina State University’s Computer Science Senior Design Center to develop a robust and user-friendly Employee Self-Service Portal. This portal aims to streamline the process of updating employee information and resetting network passwords, ultimately improving data accuracy and user autonomy.
Currently, updating employee information such as location, manager, job title, and phone number is a manual process that often involves multiple steps and interactions with the IT department. This method is time-consuming, prone to errors, and inefficient. Additionally, employees must contact IT support for network password resets, which can cause delays and impact productivity, especially outside of regular business hours.
The Town of Cary, NC, seeks to develop an Employee Self-Service Portal to streamline the management of employee information and network password resets. This project involves designing a web-based interface that allows employees to update details such as location, manager, job title, and phone number, with these updates securely written back to the Town’s Active Directory (AD). The portal will also provide a secure mechanism for employees to reset their network passwords.
The primary objective of this project is to design and implement an Employee Self-Service Portal that allows employees to:
To make this project successful, students will need to leverage several key technologies. For frontend development, HTML, CSS, and JavaScript, along with frameworks like React.js or Angular, will be essential. Backend development will require Node.js with Express.js or Python with Django/Flask, along with RESTful API design for communication between the frontend and backend. SQL-based databases will be necessary for data management, with ORM tools like Sequelize or SQLAlchemy to facilitate database interactions. Integration with Active Directory will require LDAP protocols, using libraries like ldapjs for Node.js or python-ldap for Python. Security will be paramount, with OAuth or JWT for authentication, SSL/TLS for secure communication, and role-based access control (RBAC) to manage permissions.
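To ground the Active Directory write-back and password-reset pieces, here is a hedged sketch using python-ldap, one of the libraries named above. The host, bind DN, and attribute values are placeholders; the `unicodePwd` handling (quoted, UTF-16LE, over LDAPS) reflects how AD password writes generally work.

```python
# Sketch of AD write-back via python-ldap; hostnames and DNs are placeholders.
import ldap

conn = ldap.initialize("ldaps://ad.townofcary.local")  # hypothetical host
conn.simple_bind_s("CN=svc-portal,OU=Service,DC=cary,DC=local", "***")

user_dn = "CN=Jane Doe,OU=Employees,DC=cary,DC=local"

# Write profile edits made through the portal back to AD.
conn.modify_s(user_dn, [
    (ldap.MOD_REPLACE, "telephoneNumber", [b"919-555-0100"]),
    (ldap.MOD_REPLACE, "title", [b"Senior Analyst"]),
])

# Self-service password reset: AD expects the new password as a
# quote-wrapped, UTF-16LE-encoded value in unicodePwd, sent over TLS.
new_pwd = '"CorrectHorseBatteryStaple1!"'.encode("utf-16-le")
conn.modify_s(user_dn, [(ldap.MOD_REPLACE, "unicodePwd", [new_pwd])])
conn.unbind_s()
```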
Cary Technologies:
Cary Platforms:
Constraints:
The Undergraduate Curriculum Committee (UGCC) reviews courses (both new and modified), curriculum, and curricular policy for the Department of Computer Science.
North Carolina State University policies require specific content for course syllabi to help ensure consistent, clear communication of course information to students. However, creating or revising a course syllabus to meet updated university policies can be tedious, and instructors often miss small updates to the mandatory text that the university requires in a syllabus. In addition, the UGCC must review and approve course syllabi as part of the process for course actions and for reviewing newly proposed special topics courses. Providing feedback or resources for instructors to guide syllabus updates can be time-consuming and repetitive, especially if multiple syllabi require the same feedback and updates to meet university policies.
The UGCC would like a web application to facilitate the creation, revision, and feedback process for course syllabi for computer science courses at NCSU. An existing web application enables access for users from different roles, including UGCC members, UGCC Chair, and course instructors (where UGCC members can also be instructors of courses). The UGCC members are able to add/update/reorder/remove required sections for a course syllabus, based on the university checklist for undergraduate course syllabi. Instructors are able to use the application to create a new course syllabus, or revise/create a new version of an existing course syllabus each semester.
New features include:
Aspida is a local insurtech company that is disrupting the industry. We have a first-of-its-kind platform that offers unmatched speed and experience. We value thinking differently and working in a fast-paced environment. We are a cloud-focused organization with a modern tech stack.
Giving insurance businesses a dynamic ability to configure and deploy products would provide a significant speed-to-market advantage that has been elusive in the industry. Launching new products in the insurance industry is an arduous and labor-intensive process that spans multiple systems. This leads to long lead times to capture the data required to launch a product and build the required product specs in an industry data format (ACORD PPFA).
We are proposing a project focused on building a web-based UI that enables a user to capture product details, specs, and requirements and save them in a centralized database. In addition, the system should allow a user to view and update these product details. Further, if there is time, the system will enable generation of the product definition in an industry data format (ACORD PPFA) to be consumed by other systems.
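Purely as a shape-of-the-data illustration, a captured product record might look like the sketch below. Every field name is a guess to be replaced by what the product analysts actually define, and the ACORD PPFA export is deliberately stubbed since the real mapping would follow that standard's schema.

```python
# Illustrative only: field names are hypothetical; ACORD PPFA mapping is TBD.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class RateTier:
    min_premium: float
    rate_pct: float

@dataclass
class ProductDefinition:
    marketing_name: str
    term_years: int
    conditions: list[str] = field(default_factory=list)
    rates: list[RateTier] = field(default_factory=list)

    def to_acord_ppfa(self) -> str:
        """Stub: a real version would serialize to the ACORD PPFA format."""
        return json.dumps(asdict(self), indent=2)

product = ProductDefinition("MyAnnuity 5", 5, ["surrender schedule A"],
                            [RateTier(10_000, 4.5)])
print(product.to_acord_ppfa())
```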
The target users of the system are product analysts who define an insurance product, including terms, rates, conditions, marketing name, etc. Aspida will arrange meetings with the primary users so requirements can be captured.
This is a web-based (but mobile-responsive) platform. The solution is built entirely in the cloud (AWS), and we employ cloud-native components (no EC2). This project will give students experience with a very common and popular tech stack that is in high demand in today’s job market.
Required Technology / Tech Stack Overview
Flexible on any other technologies as long as it's compatible with the tech stack above.
Data Standards
NC State DELTA, an organization within the Office of the Provost, seeks to foster the integration and support of digital learning in NC State’s academic programs. We are committed to providing innovative and impactful digital learning experiences for our community of instructors and learners, leveraging emerging technologies to craft effective new ways to engage and explore.
While there is a lot of focus on using AI to power various kinds of "chatbot" systems, AI-driven chat systems still tend to go "off the rails" and produce unexpected results, which are often disruptive, if not outright contrary, to the intentions behind a chatbot dialogue simulation where accuracy matters. We have developed a "chatbot" prototype that simulates conversations in various contexts (conducting an interview, talking to experts, leading a counseling session, etc.) using a node-based, completely human-authored branching dialogue format. While this guarantees that conversations remain on-script, it also means that a human needs to author the content, and this responsibility currently falls to internal developers on the DELTA team instead of the instructors who have more expertise with the content.
This project is a continuation of a previous Senior Design project from Spring 2024, which resulted in an editor for creating branching dialogue chatbots. We feel this tool could benefit a large number of faculty at the University and, extending the efforts of the Spring 2024 student team, we would like to expand the capabilities of the editor to support widespread use and adoption of this tool.
Provided are some current examples of how the chatbot tool is actively in use at NC State:
DELTA collaborated with the Senior Design Center in the Spring of 2024 to develop a functioning prototype of an authoring application for these node-based conversations. This tool gives users direct control over the conversational experiences they are crafting with the ability to visualize, create, and edit the branching dialogue nodes. This authoring tool does not require users to have any programming experience, as the tool converts these nodes directly into code.
Presently, DELTA is testing this prototype within our internal team to create a new chatbot experience that will be used in a course starting Fall 2024. The successful incorporation of this editor within the chatbot creation process is exciting, and we hope to expand the existing application based on feedback, documented bugs, and pain points.
The two main areas of focus for this expansion are the addition of features and UI/UX improvements. Both efforts work in tandem to support an intuitive user experience that will allow creators (instructors) to design a more robust chatbot experience for their end users (students).
Feature expansions include the ability for the author to preview the conversation from the perspective of the user, a feature necessary for testing functionality and ensuring accurate dialogue progression. Another improvement would be the ability to attach metadata to certain node choices. This metadata is a way for instructors to provide qualitative assessment, such as points and feedback on the students’ selected choices, and it must currently be added manually to the code generated by the editor. With prioritization on these first two features, another potential expansion could be the ability for the author to package and publish the chatbot experiences they create.
UX improvements include addressing bugs in the current editor and changes to node behavior that will allow for simplified management of the branching dialogue paths. Previous use-cases of this chatbot include conversations with up to 30 different endings, so further exploration and considerations for how nodes and branches can be created, displayed, organized, connected, and changed is crucial for usability.
Additionally, the current implementation stores chatfiles in the database as JSON objects in a single column. Instead, the representation of this chatfile should be modeled such that the database is format-agnostic to allow changing the format without having to re-encode all chatfiles.
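One possible format-agnostic model is sketched below with SQLAlchemy: nodes and choices become rows instead of one opaque JSON column, so the on-disk chatfile syntax can change without re-encoding stored data. Table and column names here are suggestions, not the current schema.

```python
# Sketch of a normalized, format-agnostic chatfile model (names assumed).
from sqlalchemy import Column, ForeignKey, Integer, String, Text, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class DialogueNode(Base):
    __tablename__ = "dialogue_nodes"
    id = Column(Integer, primary_key=True)
    chatbot_id = Column(Integer, index=True)   # which chatbot owns the node
    speaker = Column(String(64))
    text = Column(Text)

class Choice(Base):
    __tablename__ = "choices"
    id = Column(Integer, primary_key=True)
    node_id = Column(Integer, ForeignKey("dialogue_nodes.id"))
    next_node_id = Column(Integer, ForeignKey("dialogue_nodes.id"))
    label = Column(Text)
    metadata_json = Column(Text)  # e.g., instructor points/feedback per choice

engine = create_engine("sqlite:///chatbots.db")
Base.metadata.create_all(engine)
```

Exporting to today's plaintext chatfile syntax (or any future one) then becomes a serializer over these rows rather than a stored format.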
The previous student team developed the current editor using Svelte with SvelteKit; while we imagine continued development in the same environment would be the most efficient path forward, we are still somewhat flexible on the tools and approaches leveraged.
We envision the editor, and the chatbot instances themselves, as web applications that can be accessed and easily shared from desktop and mobile devices. The versions of the chatbot currently in use are purely front-end custom HTML, CSS, and JavaScript web applications, which parse dialogue nodes from a "chatfile": a human-readable plaintext file with a custom syntax. We want to preserve the node-based structure and current level of granularity of the chat system, but are flexible regarding the specific implementation and any potential improvements to data formats or overall system architecture.
We are the NC State Urban Extension Entomology Research & Training Program, and our mission is to promote the health and wellbeing of residents across North Carolina within and immediately surrounding the built environment. Our primary goals are to partner with the pest management community of NC to 1) address key and ongoing issues in the field through innovative research, 2) develop and disseminate relevant and timely publicly available information on household pests, and 3) to design and offer impactful training programs to pest management professionals.
The pest management industry is often behind the curve in the widespread implementation of technology, which in this modern age greatly limits access to available information. Currently, despite our program frequently developing crucial publications and offering impactful training programs, only a small percentage of our stakeholders utilize these services. This disparity is largely due to the way in which these services are currently presented: hidden within obscure NC State domains or sent via mail as paper bulletins across the State. As a result, countless stakeholders across NC currently miss integral information which could influence pest management programs, directly impacting the health of residents across the State.
Despite the slow uptake of technology within the pest management industry, one piece of technology has become ubiquitous: the smartphone. We want to work with a team of students to leverage this already-present direct link to the end user through the development of what we are tentatively calling “The Wolf-Pest App”. Our vision is an app that connects users directly to information generated by our program, as well as upcoming training offerings, on a “real-time” basis (updated regularly as new information is published; it doesn't necessarily need to be in real time). Users would be notified of new content via push notifications, but would also be able to access information through a simple and efficient UI (ideally with picture-based links!).
As this is well beyond our typical wheelhouse, we would like to work with the students to identify the key pieces of technology they feel are most useful for this project. We would like the final product to be a functioning application available to our stakeholders on the Google Play and/or Apple App stores. We recognize that developing and updating applications is an ongoing process, and given that this project is centered around NC State resources, the IP would likely remain with NC State University. However, we would want students on this project to attend the annual NC Pest Management Conference in January to announce and demonstrate their design to the industry it will serve (if they have an interest in doing so). We are unaware of any other issues, and are happy to discuss changes to our project ideas as feasibility and timelines dictate.
Ankit is the Founder & CEO of K2S and an NC State Computer Science alumnus. He envisions a platform that gives other alumni an easy way to give back to the student community by mentoring active Computer Science students. From both an industry and a university perspective, we’re trying to create a virtual engagement program for the NC State community.
Successful alumni often revisit the path that got them there, and it invariably leads them back to the roots of their alma mater. In recognition of the supporters and heroes along that path, they have the urge to become one themselves. A portal that allows alumni to easily provide mentorship and share their lessons learned is not only fulfilling to the alumni as a way of giving back; it also provides real help and guidance to students stepping out from the shadows of our lovely campus.
WolfConnect got its name when it was kickstarted with a team of Senior Design Students in Fall 2023. We aim to take it to the finish line this semester.
In summary, this solution will be an online portal that benefits the alumni as mentors, the students as mentees, and the University by fostering engagement among its current and past students.
In the Department of Computer Science at NC State, we understand that navigating the program and curriculum can be challenging. Wouldn't it be great to have someone who has been there before to support and guide you around the pitfalls, helping you reach your full potential? Much of the skill and knowledge you acquire will come from classrooms and labs, but learning also happens in the individual connections made with peers and alumni. CSC's alumni network includes over 10,000 members, many of whom have been very successful in their careers and are eager to give back and support current students. We propose creating an online mentorship portal that connects current CSC students with CSC alumni in a shared goal of promoting academic success and professional advancement for all.
Primary portal end-users include CSC alumni looking to give back to their alma mater by mentoring students, and current CSC undergraduate and graduate students looking for help on a specific topic or project. Secondary users could include alumni looking for speaking opportunities, and current students searching for contacts for specific internships and co-ops.
Examples include George Mason University's "Mason Mentors" Program and UC Berkeley's Computer Science Mentor Program.
The previous team provided a detailed handoff of their deliverables, source code, and user guides. The idea is to build on top of that existing work.
The Division of Parks and Recreation (DPR) administers a diverse system of state parks, natural areas, trails, lakes, natural and scenic rivers, and recreation areas. The Division also supports and assists other recreation providers by administering grant programs for park and trail projects, and by offering technical advice for park and trail planning and development. The North Carolina Division of Parks and Recreation exists to inspire all its citizens and visitors through conservation, recreation, and education.
Our team, the Applications Systems Program, works to support the Division and its sister agencies with web-based applications designed to fulfill their needs and the Division’s mission. The applications address personnel activity, Divisional financial transactions, field staff operations, facilities/equipment/land assets, planning/development, incidents, natural resources, etc. Using data from these web applications, we assist program managers with reporting and analytic needs.
We have sponsored many previous SDC projects, so we understand the process and know how to help you complete this project in an efficient manner while learning about real-world software application development. Our team includes five NCSU CSC alumni, all of whom have completed projects with the SDC. Four of these alumni will be overseeing the project and working directly with you to fulfill your needs and facilitate the development process.
The existing suite of legacy applications utilizes a LAMP stack (including MariaDB and PHP) and was developed over a period of 25+ years through ad-hoc application development in a production-only environment, solely to meet immediate business operation needs. Many of our legacy applications have cluttered, undocumented, procedural-style codebases, which makes them difficult to understand, maintain, and improve.
Our warehouse receives, stores, and distributes supply items and other bulk goods to nearly 50 state parks and other staffed areas across the division to meet their maintenance and operational needs. To fulfill these duties, the warehouse relies on our current warehouse application, as well as multiple nearly 20-year-old spreadsheets, to track their inventory and manage incoming park orders.
We have begun to create a suite of rewritten applications that use modern technologies, follow best practices, and are well documented. The warehouse application must be rewritten to fit these requirements for inclusion in this modern application suite.
The new warehouse application will aim to improve the workflows of our warehouse staff, superintendents, section chiefs, and park staff in tracking inventory, ordering items, and managing supply needs across the division. The application will be added to our existing repository of modernized applications. The new application shall retain the current functionality of the warehouse legacy application while being redesigned to fit a more modern, object-oriented framework that would allow for a more organized database structure and improve user workflows in the application.
There will be 3 permission levels: warehouse staff, purchasers (superintendents, section chiefs, and their delegated park staff), and park staff. All users will be able to view the current warehouse inventory levels and obtain safety data sheets. Purchasers will be able to create a “cart” (order list) of items required for their park/section and submit it to the warehouse for processing. Warehouse staff will be able to record new stock into their inventory as well as process orders by creating invoices and removing items from their inventory once dispatched. Warehouse staff will also be able to change the permission level of users below them, add/modify inventory item data (including safety data sheets), and update messages on the homepage of the application. Potential stretch goals include adding functionality to record losses and integrating support for barcode scanning.
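As a rough sketch of the roles and order lifecycle just described (names are placeholders, and the real implementation would live in the PHP/Slim backend described below):

```python
# Illustrative model of the three permission levels and the cart workflow.
from enum import Enum

class Role(Enum):
    PARK_STAFF = 1   # view inventory and safety data sheets
    PURCHASER = 2    # also: build and submit carts for their park/section
    WAREHOUSE = 3    # also: receive stock, invoice, manage users and items

class OrderStatus(Enum):
    DRAFT = "draft"
    SUBMITTED = "submitted"
    INVOICED = "invoiced"
    DISPATCHED = "dispatched"

def can_submit_cart(role: Role) -> bool:
    return role in (Role.PURCHASER, Role.WAREHOUSE)

def dispatch(order: dict, inventory: dict) -> None:
    """Warehouse-staff action: decrement stock once an order ships."""
    for item_id, qty in order["lines"].items():
        inventory[item_id] -= qty
    order["status"] = OrderStatus.DISPATCHED
```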
Tools and technologies used are limited to those approved by the NC Department of Information Technology (NC DIT). Student projects must follow State IT Policies dictated by NC DIT once deployed by DPR.
Our current modern applications utilize currently supported versions of the Docker Engine. Each modern application will be packaged into individual frontend containers using the Node.js 18 base image, React 18, and MUI 5. The backend consists of a MariaDB 10.6 database container and a unified REST API backend container, which is common to all modern applications. This unified REST API container uses PHP 8.1 and Slim Framework 4. Student projects will be expected to roughly use these technologies.
The Ergonomics Center is housed in the Edward P. Fitts Department of Industrial and Systems Engineering at North Carolina State University and provides consulting and training services to clients throughout the U.S. and globally. The Center was founded in 1994 as a partnership between NC State University and the NC Department of Labor. It was created to make workplaces safer, more productive, and more competitive by providing practical, cost-effective ways to reduce or eliminate risk factors associated with musculoskeletal disorders.
Computer work is common in both home and work environments. Many employers have also adopted hybrid work arrangements while providing employees little guidance on furniture selection and setup. Most computer users do not know how, or lack the means, to set up their computing environments properly. By providing resources for education and self-help, users can adjust or select office equipment and environments that may reduce the physical stresses on their bodies. Delivering easy-to-use, clear guidance to computer users is one of the educational threads at the core of the Center’s values.
The Center currently has an Excel-based Self-Assessment tool that was developed for a client. We would like to have a standalone app to provide clients with a resource to use with their office-based employees.
The Center envisions a web-based app that would allow users to perform a self-evaluation of their office workstation with recommendations for areas of improvement. The app should match the functionality of the Excel-based self-assessment tool provided, preserving visual cues, language, and any graphics. If possible, the app should be usable without an internet connection, since cellular and Wi-Fi signals can be limited or non-existent in some facilities. A dashboard would show before-and-after scores or ratings for comparing office spaces before and after implementing recommendations. The app should also be capable of exporting input variables and output results to a report-ready printable format such as PDF, Word, or Excel to share if desired. Because analyses may be presented to supervisors, the appearance of the exported information should be professional and require little manipulation or adjustment by the user. To control the distribution of users’ information, the app should not store user-provided data in the Cloud; information could be stored locally or be exported in one of the forms mentioned above for individual distribution via other means (e.g., email).
The Center is flexible on the technology used and is willing to proceed with student recommendations. The preferred platform would be a Progressive Web App (PWA) written in React.
This mobile-friendly tool will be provided free of charge on The Ergonomics Center’s website.
Wake Technical Community College (WTCC) is the largest community college in North Carolina, with annual enrollments exceeding 70,000 students. The pre-nursing program in the Life Sciences Department runs a two-course series on Anatomy and Physiology, where this project will be used, with enrollments exceeding 800 annually. Additionally, this project is expected to assist over 1,000 biology students every semester when fully implemented.
Biology students and pre-nursing students need to understand how living cells and the human body carry out and control processes. Proteins are at the heart of every biological process. Proteins function to speed up chemical reactions, signal for new processes, and move ions and molecules in and out of cells. Each specific protein in the body has a particular function, and that function depends on its 3D conformation. It makes sense, then, that to alter and control the activities within cells, proteins change shape to change function. One important example is hemoglobin. Hemoglobin is a complex protein found inside red blood cells, and its primary function is to carry oxygen and carbon dioxide to and from the cells of the body, respectively. Chemical groups inside hemoglobin bind to oxygen dynamically at the lungs and then release the oxygen at metabolically active tissues.
For a beginning biology or pre-nursing student, this is a difficult process to imagine from a 2D image in a textbook, and we have worked to create a tool that helps visualize protein dynamics using augmented reality. In various iterations, the tool has supported the use of AR tags to change the environmental factors that influence protein structure and function, basic animation of structural changes in 3D protein structures, and the creation of structured activities to support educational use, although never all at the same time. Integrating and enabling all of these features, alongside several new ones that make the tool more suitable for classroom deployment, is the emphasis of this project. In particular, the main goals are supporting multiple modes of AR interaction, enabling decentralized collaborative AR experiences for teams of students (or students and instructors) through real-time video collaboration and recording, and integrating the animation features with the use of multiple AR tags. Enabling assignment templates for groups of instructors teaching the same course, a more fully featured content creation and sharing portal, and associated test code will round out the requirements this semester.
The existing version of the AR app is implemented in React and allows instructors to upload molecule crystallography files (.cif), define molecule states and environmental factors, and specify the environmental factors that trigger the molecule states. Instructors can additionally create lesson plans comprising questions that students can view and submit for grading. This represents a fairly full-featured experience, although a number of "quality of life" issues remain to be addressed. The aim for this semester is to address these outstanding quality-of-life issues for both instructors and students. The main outstanding features and development tasks are:
Bandwidth is a software company focused on communications. Bandwidth’s platform is behind many of the communications you interact with every day. Calling Mom on the way into work? Hopping on a conference call with your team from the beach? Booking a hair appointment via text? Our APIs, built on top of our nationwide network, make it easy for our innovative customers to serve up the technology that powers your life.
Have you ever visited a GitHub repository that didn’t have a README? Have you ever read a README that wasn’t accurate or was missing information? It can be a frustrating experience that keeps you from properly using a new tool or service.
We want to make it easier to write READMEs for new GitHub projects, or better yet, make it easy to edit and update READMEs for existing projects to ensure their accuracy and user friendliness.
README Hero will make it simple for developers to keep their READMEs updated with the most accurate information about what a project does, how to use it, and answer any questions a user might have about it.
We would like to use GenAI to solve this problem.
The idea, generally, would be to use GenAI and Retrieval Augmented Generation (RAG) techniques to analyze a GitHub repository and its contents, including a README, code, other docs, etc., and either suggest a new README or provide updates to an existing README to make the information more accurate and easier to understand.
This may be through a chat-like interface, a web UI, or some other method you may invent. Regardless, we imagine there may need to be some user interaction (i.e., chat) for the GenAI to ask questions or gather more information to better understand the project and make the README more accurate.
It will also be important for the project to ensure the accuracy of the README. One fun idea to do this would be to have GenAI try to run the code and follow instructions from the README.
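As one rough sketch of the core loop, under stated assumptions: it uses the OpenAI Python client purely as an example (any stack option below could substitute), the chunking is deliberately naive, and the model name and repo path are placeholders. A fuller version would embed and rank chunks against README-section queries rather than concatenating them.

```python
# Hedged RAG-style sketch; model name, paths, and chunking are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def repo_chunks(root: str, exts=(".py", ".md", ".toml")):
    """Yield truncated file contents, each labeled with its path."""
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            yield f"# {path}\n{path.read_text(errors='ignore')[:4000]}"

def draft_readme(root: str) -> str:
    # Naive "retrieval": pass the first chunks straight into the context.
    context = "\n\n".join(list(repo_chunks(root))[:10])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Draft an accurate README for the repository below."},
            {"role": "user", "content": context},
        ],
    )
    return resp.choices[0].message.content

print(draft_readme("./my-repo"))
```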
We want to give you some flexibility in choosing which GenAI stack you think is best.
For the GenAI stack, we would like you to choose from one of these options:
You might also want or need some interaction with a user, perhaps to answer questions about the code or just to better understand the project, so you may also need some technology to provide a chat-like interface. This could be just a simple web app, with something like React and Python.
We’ll want you to use RAG and other techniques to augment the model’s knowledge of the files and source contents of the repository.
Again, we want to leave the technology choices up to you, but we’re happy to help guide and suggest what we think might work best.
You are also welcome to think up a fun name for this as well! We’re just using “README Hero” as a working title.
Dr. Tiffany Barnes and Dr. Veronica Cateté lead computer science education research in the Department of Computer Science at NC State University. Dr. Barnes uses AI and data-driven insights to develop tools that assist learners’ skill and knowledge acquisition. Dr. Cateté works closely with K-12 teachers and students conducting field studies of technology use and computing in the classroom.
Together they have helped hundreds of teachers use the block-based Snap! programming environment and have worked closely to develop solutions for live classrooms, engaging over 800 students and teachers each school year in computing-infused lessons.
Bahare Riahi and Ally Limke are PhD students who have been working closely with Dr. Barnes and Dr. Cateté to understand what teachers’ AI needs are for integrating programming into core subject classrooms, and both are interested in UX and HCI for AI and CS in education.
To help address the nation’s critical need for a more computer science and AI-literate populace, researchers and educators have been developing interventions for increased exposure and equitable learning experiences in computer science for K-12 students. In addition to the creation of standalone courses like AP CS Principles, Exploring Computer Science, and CS Discoveries, we have also been working on developing integrated computer science experiences for common K-12 classes such as English, Math, Science, and Social Studies.
From interviews with STEM and non-STEM teachers, we found a need for integration and adaptability in classroom management, with features such as customizable rubrics and individualized learning; enhanced feedback and supportive learning resources; AI fairness and making connections to students; and authenticity and creativity boosters in assessment. For course development and course expansion, teachers need built-in code tracing, document updates, and course materials customized to individual needs and levels. Moreover, teachers encouraged monitoring features such as customizable tools offering individualized growth monitoring, help notifications, pop-up tips, reminders, and motivational features.
To support the new influx of teachers and educators from various backgrounds teaching block-based programming lessons, we developed a wrapper for the Snap! language called Snapclass. This system supports assignment creation and student submission as well as project grading. The tool was developed using various programming paradigms, and after initial deployment and usability testing, we have new feedback to work with to address user needs and tool functionality.
Snapclass, like other software projects, has had a long lifespan, with new features and updates being added over time. Regular bug fixes, updates, and optimizations are necessary to keep the software running smoothly. In order for the Snapclass system to reach a new level of users, the codebase needs to scale accordingly. This means new features, modules and components should be easy to add without compromising the stability or performance of the system. With a well-structured and maintainable code base, we can more easily adapt to changing user requirements and integrate more third-party libraries or frameworks such as LMS support.
Primarily, we would like to design and build a dashboard to help teachers orchestrate computing activities. Teachers perform many tasks in their classrooms to orchestrate learning; part of that orchestration includes monitoring student progress and giving students feedback and grades to determine when they need help. AI can be a powerful tool for providing teachers with insights on their individual students and on their classes as a whole, helping them make data-informed interventions and reflect on their teaching. The team working on this project will update Snapclass guided by the following goals:
Dr. Tiffany Barnes and Dr. Veronica Cateté lead computer science education research in the Department of Computer Science at NC State University. Dr. Barnes uses AI and data-driven insights to develop tools that assist learners’ skill and knowledge acquisition. Dr. Cateté works closely with K-12 teachers and students conducting field studies of technology use and computing in the classroom.
Together they have helped hundreds of teachers use the block-based Snap! programming environment and have worked closely to develop solutions for live classrooms, engaging over 800 students and teachers each school year in computing-infused lessons.
Ally Limke is a PhD student who has been working closely with Dr. Barnes and Dr. Cateté to understand what teachers need to lead programming activities in their classrooms. This is Ally Limke’s fifth semester working with Senior Design teams to create SnapClass.
To help address the nation’s critical need for a more computer science and AI-literate populace, researchers and educators have been developing interventions for increased exposure and equitable learning experiences in computer science for K-12 students. In addition to the creation of standalone courses like AP CS Principles, Exploring Computer Science, and CS Discoveries, we have also been working on developing integrated computer science experiences for common K-12 classes such as English, Math, Science, and Social Studies.
To support the new influx of teachers and educators from various backgrounds teaching block-based programming lessons, we developed a wrapper for the Snap! language called Snapclass. This system supports assignment creation and student submission as well as project grading. The tool was developed using various programming paradigms, and after initial deployment and usability testing, we have new feedback to work with to address user needs and tool functionality.
SnapClass v6.0 will build on the work done by five prior Senior Design teams beginning in Spring 2022. The prior teams have added useful functionality to SnapClass such as the integration of multiple block-based programming environments into the system, a FAQ for students working in Snap, mechanisms for auto-saving code, differentiated assignment to students based on skill level, non-coding assignments, etc.
Snapclass, like other software projects, has had a long lifespan, with new features and updates being added over time. Regular bug fixes, updates, and optimizations are necessary to keep the software running smoothly. In order for the Snapclass system to reach a new level of users, the codebase needs to scale accordingly. This means new features, modules and components should be easy to add without compromising the stability or performance of the system. With a well-structured, commented, and maintainable code base, we can more easily adapt to changing user requirements and integrate more third-party libraries or frameworks such as LMS support.
Prior developers and researchers working on Snapclass have put together a list of functionality that falls short of what K-12 educators desire. This semester, for SnapClass v6.0, we would like to work with the team of students on the final 15% of the project: polishing usability and functionality and improving overall system effectiveness. As the team catalogs the inventory of improvements, we encourage them to research software architecture and database best practices so that they may have the opportunity to refactor different modules of Snapclass. The following are specific goals for the senior design team in Fall 2024:
Dr. Tiffany Barnes and Dr. Veronica Cateté lead computer science education research in the Department of Computer Science at NC State University. Dr. Barnes uses AI and data-driven insights to develop tools that assist learners’ skill and knowledge acquisition. Dr. Cateté works closely with K-12 teachers and students conducting field studies of technology use and computing in the classroom. Dr. Tian is a new research scientist who will start working with Dr. Barnes on September 3 and will join the mentoring team.
Together they have helped hundreds of teachers use the block-based Snap! programming environment and have worked closely to develop solutions for live classrooms, engaging over 800 students and teachers each school year in computing-infused lessons.
To help address the nation’s critical need for a more computer science-literate populace, researchers and educators have been developing interventions for increased exposure and equitable learning experiences in computer science for K-12 students. In addition to the creation of standalone courses like AP CS Principles, Exploring Computer Science, and CS Discoveries, we have also been developing integrated computer science experiences for common K-12 classes such as English, Math, Science, and Social Studies.
To support the new influx of teachers and educators from various backgrounds teaching block-based programming lessons, we developed a wrapper for the Snap! language called SnapClass. This system supports assignment creation and student submission as well as project grading. The tool was developed using various programming paradigms, and after initial deployment and usability testing, we have new feedback to address user needs and improve tool functionality.
SnapClass v6.0 will build on the work done by five prior Senior Design teams beginning in Spring 2022. The prior teams have added useful functionality to SnapClass such as the integration of multiple block-based programming environments into the system, a FAQ for students working in Snap, mechanisms for auto-saving code, differentiated assignments to students based on skill level, non-coding assignments, etc.
During this semester, we want to integrate a logging system for student interactions in the SnapClass programming environments. This logging system is important for two reasons: 1) Logs can help us provide useful insights to teachers about their students including progress, time on task, and goals and 2) Logs can enable research to be done on student programming behaviors including struggling, tinkering, and understanding. This semester, we want to focus on the following:
Dr. Lavoine is an Assistant Professor in renewable (nano)materials science and engineering. Her research explores the performance of renewable resources (e.g., wood, plants) in the design and development of sustainable alternatives to petroleum-derived products (e.g., food packaging). As part of her educational program, Dr. Lavoine aims to help faculty integrate elements of sustainability into their curricula, raise students’ awareness of current challenges in sustainable development, and equip students with the tools to become our next leaders in sustainability.
In early June 2024, Dr. Lavoine and three of her colleagues offered a faculty development workshop on sustainability that introduced a new educational platform that aims to guide faculty in the design of in-class activities on sustainability. This platform integrates three known frameworks: (1) the pillars of sustainability (environment, equity, economy), (2) the Cs of entrepreneurial mindset (curiosity, connection, creating value), and (3) the phases of design thinking (empathy, define, ideate, prototype, implement, assess). For ease of visualization and use, the platform is represented as three interactive, spinnable circles (one circle per framework) that faculty can spin and align with each other to brainstorm ideas at the intersection of phases from the different frameworks.
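To make the alignment idea concrete, here is a minimal sketch (in Python, purely illustrative and not part of Dr. Lavoine's specification) of how the platform's three circles could be enumerated to generate brainstorming prompts at framework intersections:

    # Hypothetical sketch: enumerate intersections of the three frameworks
    # to seed brainstorming prompts, mirroring the aligned-circle idea.
    from itertools import product

    pillars = ["environment", "equity", "economy"]
    mindset = ["curiosity", "connection", "creating value"]
    design  = ["empathy", "define", "ideate", "prototype", "implement", "assess"]

    for pillar, c, phase in product(pillars, mindset, design):
        print(f"Design an activity combining {pillar} and {c} in the '{phase}' phase.")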
As of today, this platform serves purely as a visual tool and includes no other functionality. Dr. Lavoine would like to develop the idea further, and upgrade this platform into an interactive, useful, educational tool that faculty can relate to when designing, or seeking inspiration for, in-class activities on sustainability. Dr. Lavoine would like this platform to grow as a database of educational activities on sustainability that instructors can use in their courses.
If successful, this educational platform will be shared with the entire teaching community. Demand from instructors and students for sustainability content is growing rapidly, and now, more than ever, it is important to put in place best practices in teaching and learning sustainability. This is not an easy task, and research in the field has already shown tensions between frameworks, systems put in place, and so on. The idea with this platform is not to tell faculty what sustainability is, but rather to guide them toward creating active, engaging learning activities for their students on the topic, with respect to their (engineering) disciplines. For instance, have you thought about how computer science can help improve sustainability and meet sustainable development goals?
The users of this educational platform are faculty or instructional designers who are creating educational activities for sustainability. The platform also needs admin users who manage accounts, moderate activities, and oversee comments.
Here are some initial thoughts on the functionality of this educational platform.
This project can evolve based on suggestions and creativity from the student team. Dr. Lavoine is open to any ideas to increase user adoption and the usefulness of this educational platform.
I would suggest focusing, at first, on only two of the frameworks (circles): the sustainability one (1) and the entrepreneurial mindset one (2). Dr. Lavoine will provide you with examples of in-class educational activities integrating these two frameworks.
This project is one way to contribute towards sustainability! Creating an open platform on sustainability education, that can grow thanks to users’ inputs, would have a huge impact on the entire higher education community.
This platform should be accessible on the web with support for mobile devices (e.g., touch interfaces).
With this project, there are no licensing constraints, legal issues, or IP issues. Ideally, I would love to file an invention disclosure with the students/PI named as inventors, simply to protect this platform in some way. A copyright license may be more appropriate.
Dr. Lavoine is flexible about technology selection. The user interface must be easy to use and accessible to a broad diversity of users, so student input is more than welcome. Feasibility (what can be done within the given time) is a factor, as is low cost (the budget is limited, though with the students’ help, Dr. Lavoine may be able to ask for more).
NetApp is a cloud-led, data-centric software company dedicated to helping businesses run smoother, smarter, and faster. To help our customers and partners achieve their business objectives, we ensure they get the most out of their cloud experiences – whether private, public, or hybrid. We provide the ability to discover, integrate, automate, optimize, protect, and secure data and applications. At a technology level, NetApp provides cloud, hybrid, and all-flash physical solutions; unique apps and technology platforms for cloud-native apps; and an application-driven infrastructure which allows customers to optimize workloads in the cloud for performance and cost.
Large distributed systems like NetApp’s ONTAP-based AFF storage clusters have a need for high bandwidth, low latency, and extremely high resiliency between nodes. As our customers' storage needs increase with new workloads such as AI, NetApp cluster sizes grow and the storage nodes of the cluster may need to be distributed across a data center. This distribution can motivate an administrator to move away from a factory-engineered central pair of switches towards a shared data center network for the cluster interconnect.
NetApp is partnering with customers to enhance and evolve use cases for our software, services and hardware, particularly for cutting edge AI workloads, lower environmental impacts (power, cooling), and cost optimizations. There is a growing need to support the high-performance and resilient networks natively on customer data center networks, rather than requiring a separate, dedicated network for the NetApp equipment.
However, moving to a customer-designed and customer-controlled network could deactivate many of the guardrails, monitoring mechanisms, and auto-heal features that make ONTAP so reliable. Therefore, a new approach is needed to ensure that network problems can be detected or even predicted as early as possible.
This project aims to design a system that can classify and identify various failures within a black-box multi-hop network that connects a pair of nodes inside a data center. Observations of networking conditions and statistics are limited to what can be obtained by the nodes/hosts themselves, without knowledge of the network topology or administrative access to switches and routers.
A machine learning (ML) model is expected to be a superior solution for analyzing network conditions compared to hard-coding an algorithm that attempts to anticipate all problems and conditions in diverse networks. However, justifying the superiority of a non-ML algorithm over an ML solution is also an acceptable outcome for the project.
The project can either be implemented using real-time analysis of measurements on the nodes themselves or “off-box” by analyzing archived data periodically (e.g. daily).
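As a rough illustration of the node-side observation idea, the sketch below (Python, assuming Linux nodes; the peer address and file names are hypothetical) collects host-side measurements that an ML model or heuristic could later analyze, either in real time or off-box:

    # Hypothetical sketch: gather node-side network observations with no
    # visibility into the switches/routers between the two nodes.
    import json, subprocess, time

    PEER = "10.0.0.2"  # hypothetical address of the other node

    def sample():
        # Round-trip latency via 3 quiet ping probes to the peer
        ping = subprocess.run(["ping", "-q", "-c", "3", PEER],
                              capture_output=True, text=True)
        # Kernel TCP counters (retransmissions, errors) visible on the host
        with open("/proc/net/snmp") as f:
            tcp_counters = f.read()
        return {"ts": time.time(), "ping": ping.stdout, "tcp": tcp_counters}

    # Append periodic samples for later real-time or off-box analysis
    with open("net_observations.jsonl", "a") as log:
        log.write(json.dumps(sample()) + "\n")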
The project is divided into the following parts:
The technologies for this project are flexible. Since the project will not be implemented directly in ONTAP, the operating system used by the students for the nodes, switches, routers, or analysis boxes can be any that is convenient for them, as long as it is one that is also accessible to NetApp. Linux is a recommended choice due to its widespread use and compatibility.
The solution should be implemented primarily in user space. However, kernel modifications are acceptable if necessary for the collection of counters on the nodes.
Open source tools and libraries are encouraged to accelerate development progress. Students will use VMs in the NCSU CS department as clients and as emulated network switches/routers. Technologies to review include Open vSwitch and ns3.
Security Visibility & Incident Response (SVIC) provides visibility into security and risk, performs incident prevention and response, and drives root cause analysis to improve the overall security posture at Cisco. To achieve this, our team helps Cisco business entities detect, respond to, and mitigate security incidents; improve compliance and team security posture; and ensure Cisco meets regulatory and contractual obligations for data loss notification.
Successfully collecting, ingesting, and reviewing application logs within a service/product delivery environment represents a significant challenge for enterprises, due to differing log formats (CSV, JSON, syslog, text, XML) and the unique log contents and record structures (e.g., the fields and their order within a log record). Further, logs often do not contain necessary information for security investigations. While a Security Information and Event Management (SIEM) can streamline the ingestion process, without context (i.e., what the log data means) or the right information, logs become much less valuable.
With the number of applications and tools generating logs within Cisco, having SVIC personnel become Subject Matter Experts (SMEs) on each log type is not a scalable solution. What is needed is a process for automatically evaluating the logs to a) categorize the logs, b) ensure that the logs contain required information, and c) extract useful information.
Develop a program, a technology, or a process to automatically evaluate text-based logs (CSV, JSON, syslog, raw text, XML) for completeness and usefulness. Additionally, the program, technology, or process would either automate or assist in developing log context (before or after ingestion into a SIEM). Optimally, the program, technology, or process would result in:
There are multiple avenues that could be pursued toward a solution. One path might be to use existing artificial intelligence or machine learning (AI/ML) systems to evaluate and categorize log data. A second path might be to develop a self-standing/separate large language model (LLM) specific to logging. A third path might be to combine existing extract/transform/load (ETL) tools with a SIEM to develop a solution that automatically builds data dictionaries for the log data.
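As a small illustration of a pre-processing step any of these avenues would need, the sketch below (Python; purely illustrative, not a prescribed design) guesses a log record's format before any categorization or data-dictionary building:

    # Hypothetical sketch: first-pass format detection for a single log record.
    import csv, io, json
    import xml.etree.ElementTree as ET

    def guess_format(record: str) -> str:
        record = record.strip()
        try:
            json.loads(record)
            return "json"
        except ValueError:
            pass
        try:
            ET.fromstring(record)
            return "xml"
        except ET.ParseError:
            pass
        # Syslog lines often begin with a <priority> tag, e.g. "<34>Oct 11 ..."
        if record.startswith("<") and ">" in record[:6]:
            return "syslog"
        if len(next(csv.reader(io.StringIO(record)))) > 1:
            return "csv"
        return "text"

    print(guess_format('{"user": "alice", "action": "login"}'))  # -> json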
The most essential functionality is described above: the ability to automatically categorize logs into different classes, determine whether they contain particular information, and document the important information they contain. As stretch goals, the solution would also:
There are a limited number of constraints to the solution, primarily focused on data security and protecting intellectual property:
The program, technology, or process can either be built from scratch (green field) or can be developed by combining existing programs and/or technologies. The solution can be implemented in the cloud or on enterprise datacenter resources (local networks, servers, operating systems). Using technologies with free or open licensing models is preferred but not required.
Deutsche Bank is a truly global universal bank that has operated for 150 years on multiple continents across all major business lines, serving as a cornerstone of the global and local economies. We are leaders and innovators who are driving the financial industry forward, increasingly with the help of technology, from cloud to AI.
Understanding data flow within an application is critical for ensuring business observability and efficient monitoring. Traditional methods of analyzing data flow can be time-consuming and complex, especially for large codebases. With recent advancements in Generative AI and Large Language Models (LLMs), there is an opportunity to automate the generation of UML diagrams that describe data flows within an application. This can greatly enhance our ability to monitor and trace data in real time, identify key observability data points, and improve overall business observability.
This project aims to develop a “Data Flow Analyzer” tool that utilizes LLMs to analyze source code and generate UML diagrams describing the data flows within the application. The generated UML diagrams will highlight key observability data points in log records, databases, messages (Kafka, JMS, MQ), etc. The project will use PlantUML language to write these UMLs, making them easy to visualize and integrate into existing monitoring tools.
We can be technology agnostic, at least to an extent, though our preference would be to use Google Cloud Platform and the AI functionality it offers, or OpenAI models available through an API. Open-source models may also be used if the resulting quality is acceptable. The project will also involve working with PlantUML for diagram generation.
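A minimal sketch of the core loop, assuming the OpenAI Python SDK with an API key in the environment (the model name and prompt wording are illustrative, not requirements):

    # Hypothetical sketch: ask an LLM for a PlantUML data-flow diagram.
    from openai import OpenAI

    client = OpenAI()

    def data_flow_uml(source_code: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "You analyze source code and emit PlantUML that "
                            "describes its data flows, highlighting log records, "
                            "databases, and Kafka/JMS/MQ messages."},
                {"role": "user",
                 "content": f"Generate a PlantUML data-flow diagram for:\n{source_code}"},
            ],
        )
        # The reply is expected to contain an @startuml ... @enduml block
        return response.choices[0].message.content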
Hitachi Energy Transformers Business Unit specializes in manufacturing and servicing a wide range of transformers. These include power transformers, traction transformers, and insulation components. They also offer digital sensors and comprehensive transformer services. Their products are crucial for the efficient transmission and distribution of electrical energy, supporting the integration of renewable energy sources and ensuring grid stability.
Hitachi Energy’s North American R&D team for the Transformers business unit, located at the US corporate offices in Raleigh, NC, is looking to partner with NCSU to enhance the functionality of a software component delivered with some of our product offerings.
The Web API Link is a Hitachi Energy application that transmits data between remote endpoints. For this project, the data transmission will occur between remote monitoring devices and the Hitachi Energy Lumada Asset Performance Management (APM) application. The data transmitted are power transformer condition parameter values, like ambient temperature, oil temperature, gas content in oil, etc., that are tracked or calculated by the remote monitoring devices. The remote monitoring devices are Smart Sensors and/or Data Aggregators attached to the power transformers. They transmit data over a TCP/IP network using the Modbus TCP, IEC-61850, or DNP3 protocols.
The Lumada APM is an asset performance management application that is used to assess the condition of power transformer fleets. It can be accessed as a SaaS or On-Premise application, and it has a REST API interface that allows the exchange of data with external applications.
In transmitting data between remote endpoints, the Web API Link interacts with two types of devices.
Figure 1 describes graphically the main components of the application.
Figure 1: Data transmission between Input and Output devices
The architecture of the application allows drivers/plug-ins to be attached to the application, adding support for additional communication protocols and functionality dynamically, without having to modify the core code of the application.
The configuration of the application, and of the endpoints between Input Devices and Output Devices, is done via JSON configuration files.
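Purely as an illustration of this driver/plug-in idea (in Python rather than the Web API Link's actual implementation language; all names and the config file are hypothetical):

    # Hypothetical sketch: protocol plug-ins behind a common interface,
    # wired up from a JSON configuration file, so adding a protocol
    # (e.g., IEC-61850 MMS or DNP3) never touches the core code.
    import json
    from abc import ABC, abstractmethod

    class InputDriver(ABC):
        @abstractmethod
        def read_values(self) -> dict: ...

    class ModbusTcpDriver(InputDriver):
        def __init__(self, cfg: dict):
            self.host, self.port = cfg["host"], cfg["port"]
        def read_values(self) -> dict:
            return {}  # a real driver would poll registers here

    DRIVERS = {"modbus-tcp": ModbusTcpDriver}  # new protocols register here

    with open("endpoints.json") as f:          # hypothetical config file
        cfg = json.load(f)
    driver = DRIVERS[cfg["protocol"]](cfg)     # core code stays unchanged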
Currently, the Web API Link application implements data transmission between Modbus TCP-enabled devices and the REST API, which is the native interface that Hitachi Energy’s Cloud application uses to receive data from the remote devices. The NCSU Senior Design project will focus on the following enhancements:
Adding support for these features would expand the Web API Link’s connection capabilities, increasing our integration options and making the application easier to configure.
The initial phase of this project is to add an IEC-61850 MMS communication protocol module/plug-in to the Web API Link application. IEC-61850 protocols other than MMS are out of scope for this project. A Hitachi Energy proprietary library that implements MMS, as well as other IEC-61850 communication protocols, will be used.
Referring to the application diagram (Figure 1), the IEC-61850/MMS communication protocol would be implemented as an Input Device driver/plug-in that allows the Web API Link application to communicate with, and retrieve values from, devices enabled with this communication protocol. Once the new features are implemented, a Customer or Field Engineer could create connections between the configured input devices and the Cloud application.
The second phase of this project will be to develop a desktop Configuration User Interface for the Web API Link application.
Currently, all of the application’s configuration parameters are contained in several JSON files, grouped as follows:
The Configuration UI will provide the end users (Customers and/or Field Engineers) with a user-friendly interface to configure the application parameters. This will be a very important addition because the current configuration process, editing the JSON files manually, is cumbersome and confusing for end users.
The main target OS platform for the desktop Configuration UI will be Ubuntu Linux; however, a multi-platform application that runs on Windows (10/11) and Ubuntu Linux (20.04 and above) could be developed. No other OS platforms aside from Windows and Linux are necessary.
At the moment the implementation of the plug-in for the DNP3 communication protocol is a stretch goal. If time permits, this would be a great feature to include, but it is not as critical as the other two extensions.
The technologies to be used to create the IEC-61850 and DNP3 protocol plug-ins are:
The technology requirements for the Web API Link Configuration UI are:
The Laboratory for Analytic Sciences is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information.
Intelligence analysts utilize a wide variety of applications, tradecraft, and data to achieve their goals. From an organizational perspective, it is critical to understand how all of this works at a granular level to inform business intelligence decisions. Where are the pain points in workflows that could potentially be improved? Which applications, or features within them, are significant boosters to productivity, and which are bottlenecks? How much impact would result from improving the volume and veracity of the available data? Developing AI enhancements is very expensive, so exactly which enhancements should we focus on? These and a hundred other questions are persistently on the minds of organizational leaders whose goal is increasing the productivity and efficiency of their workforce.
To enable data-driven business intelligence decision-making, as well as the development of new AI/ML capabilities, we obviously need data. “Instrumentation” is the process of adding code to an application to monitor the performance of the application and its users, to diagnose errors, and to understand its inner state. Many applications have only basic instrumentation technology embedded. For our purposes, we’d like to drill deeper and get a far more granular view into what analysts are doing, for how long, and with what success. This will support both business decisions and the development of future AI/ML technologies.
The LAS would like to develop a technical methodology for designing and instrumenting a suite of applications supporting a given analyst task. As an exemplar task, we’d like to study the utility of speech-to-text (STT) data in search & discovery analysis workflows. In recent years, the LAS has produced baseline research into this task, including development of a taxonomy of STT actions. This taxonomy of STT actions will comprise the key target set of actions to be instrumented within the relevant application(s). Instrumenting some of these actions will offer a challenge beyond a straightforward software development exercise, since some actions can only be inferred imprecisely within the current confines of the application. For example, if a user is perusing results from a search engine, then without supplementary technology such as eye-tracking it would seem impossible to precisely measure how much time the user spent reviewing each individual item in the search results.
Regarding application(s), the LAS does not currently have a one-tool-to-rule-them-all solution for STT tradecraft. The LAS does have a prototype called “CAP”, and another called “GUESS”, which can suffice as a basis for development and experimentation of instrumentation methodologies/technologies. However, it would be ideal if the team could enhance these applications to incorporate several more of the key actions to be instrumented. At the moment, CAP and GUESS enable analysts to investigate STT datasets through keyword and semantic searches. Analysts can use a variety of techniques to explore the results of these searches, and then iterate with additional searches into STT or complementary data sets.
The student team is asked to take a given set of applications, and a given set of key actions that analysts perform while using the applications, and design/implement a methodology by which to capture all practical, relevant, metrics regarding the analysts’ use of those actions. At a minimum, this will entail identification of useful instrumentation technologies (e.g. umami), and integration of such technologies into the prototype applications (and/or software development of instrumentations from scratch). In addition, we would like the team to use the lessons learned in this process to develop a strategy, and possibly even a software framework if applicable, for expanding these STT instrumentation integrations to additional tasks/applications. So, a generalized perspective on lessons learned and an identification of best practices in this space are also useful outcomes to document.
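As one small, hedged illustration of the collection side of such an integration, here is a minimal event collector (Python/FastAPI; the endpoint, field names, and log file are hypothetical, loosely following the taxonomy-of-STT-actions idea):

    # Hypothetical sketch: a tiny collector that appends analyst action
    # events (e.g., keyword_search, review_result) to a JSON-lines log.
    import json
    from datetime import datetime, timezone
    from fastapi import FastAPI

    app = FastAPI()

    @app.post("/events")
    def record(event: dict):
        # e.g. {"analyst_id": "a1", "action": "keyword_search", "query": "..."}
        event["ts"] = datetime.now(timezone.utc).isoformat()
        with open("stt_actions.log", "a") as f:  # swap for a real store later
            f.write(json.dumps(event) + "\n")
        return {"ok": True}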
As an example of a business intelligence decision, consider the following. With STT workflows in particular, the data itself often contains errors due to errors in the STT algorithm output (typically resulting from low-quality audio, background noise, muddled speech, etc). It may be possible to invest in better STT algorithms to reduce these errors. Alternatively, those resources could be applied towards designing a better UI for the STT exploratory data analysis application itself to reduce the large amount of time analysts have to spend “fighting” with the application rather than exploring data seamlessly and quickly. Which of these options would prove more beneficial? Think of the information an organizational leader would like in front of him or her to help make such a decision, and think of how we could capture the data needed to produce that information.
Also noteworthy, the instrumentation data is expected to be useful for more than business intelligence. In particular, this data can support future development/training of AI models (recommendation algorithms, automation, etc) to assist the analysts. This may be important to keep in mind throughout this senior design project.
The team will have great freedom to explore, investigate, and design the system of instrumentation needed for STT actions, as well as enhancements to the given applications. However, the methodology employed should not have any restrictions (e.g. no enterprise licenses required). In general, we will need this technology to operate on commodity hardware and software environments. Beyond these constraints, technology choices will generally be considered design decisions left to the student team. The LAS will provide the student team with access to AWS resources for development, testing and experimentation. The LAS will also provide a suite of applications, and a set of key actions, to be instrumented.
ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students.
The Laboratory for Analytic Sciences (LAS) is a research organization in support of the U.S. Government, working to develop new analytic tradecraft, techniques, and technology that help intelligence analysts better perform complex tasks. Processing large volumes of data is a foundational capability in support of many analysis tools and workflows. Any improvements to existing processes and procedures, whether they are measured in time, efficiency, or stability, can have significant and broad reaching impact on the intelligence community’s ability to supply decision-makers and operational stakeholders with accurate and timely information. One such improvement is the integration of AI into analyst workflows.
The aim of this project is to develop a content management system that stores the data stakeholders require to adjudicate an AI model entering our production workflow. Built correctly, this application will allow AI model and dataset review members to submit new dataset information, AI model information, and AI model use-cases to track the AI adjudication process. Each member of the approval committee will be looking for specific information (e.g., that the license the model uses is satisfactory), and they collectively approve AI use once the required information is in place. When a new AI model request enters the system, prior approved dataset information can be used to speed up the request. With a database of adjudicated AI models, the system will also allow general users to search the system to see the results of previous AI model and dataset decisions.
For the current semester, we seek to build a user interface backed by a database, with API connections and managed controls for both application users and application owners.
Feature 1 - AI Model Management System User Interface
As the owner of an AI adjudication process, I would like to have a user interface that allows me to submit an AI model or dataset into an AI management system. The system will clearly display the required fields, with descriptions, so I can fully verify the AI model or dataset’s attributes. The user interface will allow me to submit the AI model or dataset metadata for reference by the whole adjudication committee. The user interface will allow all data scientists to search the AI models and datasets submitted by fields such as the model name, license, and current adjudication status to discover information about previously submitted AI models. AI models, when submitted, will be immediately set to the adjudication status of “submitted”.
Feature 2 - Database
The AI management system will include a backend database that will store all metadata submitted for each AI model or dataset. The database will also track the adjudication status of each submission to know if it is submitted, pending, declined or approved.
Feature 3 - API Access
Connections between the AI Management User Interface and the backend database will be administered through an API. We will need to track the date and time of each post and update to the database.
Feature 4 - Role Based Access
The AI adjudication process owner(s) will be able to submit AI models, edit the models that have been submitted, and will be able to edit the adjudication status. All users of the application will be able to search all AI models that are in the AI Management System, by model or dataset name and version, model modality type, license, adjudication status, purpose of the model, and submitter.
Feature 5 - Deployment
The AI Management System will be deployed using Docker.
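To make the data model implied by Features 1-4 concrete, here is a minimal sketch (Python with SQLite; the table and field names are hypothetical, and the actual stack is dictated by the sponsor's chosen technologies below):

    # Hypothetical sketch: one submission record covering both models and
    # datasets, with the four adjudication statuses from Feature 2.
    import sqlite3

    conn = sqlite3.connect("ai_mgmt.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS submissions (
            id         INTEGER PRIMARY KEY,
            kind       TEXT CHECK (kind IN ('model', 'dataset')),
            name       TEXT NOT NULL,
            version    TEXT,
            modality   TEXT,
            license    TEXT,
            purpose    TEXT,
            submitter  TEXT,
            status     TEXT DEFAULT 'submitted'
                CHECK (status IN ('submitted', 'pending', 'declined', 'approved')),
            updated_at TEXT DEFAULT (datetime('now'))
        )""")
    conn.commit()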
The AI Management System is designed to be used immediately following this semester. Given this, most of the technology stack has already been chosen:
For testing, we would like the team to look at Cypress for end-to-end frontend testing.
ALSO NOTE: Public distributions of research performed in conjunction with USG persons or groups are subject to pre-publication review by the USG. In the case of the LAS, typically this review process is performed with great expediency, is transparent to research partners, and is of little to no consequence to the students.
Filtration is everywhere. We filter the air we breathe and the water we drink. Filtration can be found in our cars, refrigerators, and even in our showers and coffee machines. Without filtration, there is no baby food, no computers, no medicines, and certainly no clean, safe planet.
Can you imagine a world without filtration? A world where almost nothing is safe? Neither can we.
That's why we at MANN+HUMMEL work every day on filtration solutions for cleaner mobility, cleaner air, cleaner water, and cleaner performance & industry. As one of the world's largest filter manufacturers, we want to understand how this world works — and make it a little better every day. Our goal is to protect people, nature, and machines by using filtration to separate the useful from the harmful.
For more than 80 years, we have stood for leadership in filtration. As a family-owned filter company with German roots, we operate globally at more than 80 locations. For our filtration solutions, we use the combined know-how of our employees, a global research and development network, and the opportunities offered by digitalization. The future is being decided now. That is why today we provide filtration solutions for tomorrow.
In my department, we focus on IoT solutions for commercial-grade HVAC systems, paint booths, and other commercial machinery. One of the most requested solutions is a monitoring system featuring a set of sensors that determine current status, a mechanism for alerting users to different events, and some ability to predict the next required maintenance.
Many customers have specific machinery they work with and require specific IoT hardware and solutions to fit their use case. This has led to many individual solutions. It’s challenging to design ‘catch all’ solutions that are suitable for a wide variety of monitoring problems.
The NCSU senior design team will be developing a web application that uses image processing and AI to help solve a large class of monitoring problems. It will take some initial state images of a machine, such as the outside or inside of an HVAC unit. The user will then provide a list of states they want to know about, such as ‘is running’, ‘has a panel open’, ‘has a maintenance person in the room’, ‘is on fire’, etc.
Then on a regular schedule, this solution would receive an updated image of the machine or room. Using GPT vision, it would give a confidence score on each identified state.
For example, imagine maintenance worker Bob in charge of 50 HVAC units across 6 buildings, each a 10-minute walk from each other. He and his staff need a method for making sure every machine is working correctly, and if there is a problem he needs to know right away so he can resolve the issue. Using this solution, he has set up video feeds of his machines and automatically captures images to be fed to the system. If the system was given an image of an HVAC unit with one of its access panels slightly ajar and a maintenance worker standing nearby, it might respond with something like the following:
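(The sponsor's sample response is not reproduced here; purely as an illustration, the system might return something like: ‘has a panel open’: 0.92, ‘has a maintenance person in the room’: 0.88, ‘is running’: 0.74, ‘is on fire’: 0.02.)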
The most essential features of this project are the ability to configure it for a specific task, interact with GPT and provide an assessment of the state given an image. Other functionality, while important, is a lower priority.
This should be written mostly in Python and rely on OpenAI and GPT Vision. The frontend should be written with NextJS and TypeScript. The sponsor will be able to provide API keys for third-party services used in the solution.
We expect typing and linting tools to be used for any code written, and unit tests where applicable.
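A minimal sketch of the core assessment call, assuming the OpenAI Python SDK (the model name, prompt, and state list are illustrative; in practice the image would come from the captured video feed):

    # Hypothetical sketch: score each configured state for one image.
    import base64, json
    from openai import OpenAI

    client = OpenAI()
    STATES = ["is running", "has a panel open",
              "has a maintenance person in the room", "is on fire"]

    def assess(image_path: str) -> dict:
        b64 = base64.b64encode(open(image_path, "rb").read()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o",
            response_format={"type": "json_object"},
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Return a JSON object mapping each state to a "
                             f"confidence between 0 and 1: {STATES}"},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        )
        return json.loads(resp.choices[0].message.content)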
Dr. DK Xu and Dr. Yuchen Liu are collaborating on designing and prototyping an adaptive chat platform to understand, develop, and visualize computer networks. Dr. DK Xu and Dr. Yuchen Liu are professors at NCSU CSC specializing in artificial intelligence and computer networks. Their mission is to advance the field of network communication through innovative AI technologies, making complex network management more accessible and efficient. The ChatNet project aligns with their goals by leveraging cutting-edge AI to enhance understanding and visualization of network systems.
Imagine you are a network engineer or a student studying computer networks, and you encounter a problem or need to understand a complex concept related to network communication. Traditionally, you might search online for information, but the results are often a mix of outdated, irrelevant, or overly complex content. This challenge is compounded by the rapid evolution of network technologies and the increasing complexity of network systems.
The rise of artificial intelligence (AI) and large language models (LLMs) has transformed the way we interact with information. These models, like OpenAI's GPT series, have demonstrated remarkable capabilities in understanding and generating human-like text, making them ideal for interactive learning and problem-solving. However, these models still face challenges in filtering noise from the internet and providing precise, context-aware responses. To overcome these limitations, Retrieval-Augmented Generation (RAG) combines the strengths of retrieval-based and generative models. This approach allows the system to retrieve relevant information from a vast corpus and generate accurate, coherent responses tailored to user queries. RAG enhances the ability of LLMs to deliver high-quality, specific information, making it particularly suitable for specialized fields like computer networks.
In this project, we aim to conduct an exhaustive study on LLMs’ comprehension of computer networks. We will formulate three progressive questions to determine: 1) whether LLMs can provide correct answers when supplied with basic computer network questions; 2) whether LLMs can help make decisions (e.g., for network anomaly detection) based on the basic knowledge and given measurement datasets; and 3) whether LLMs can provide graphical representations of network topologies. To assess these capabilities, we will develop an adaptive chat platform, ChatNet, by employing GPT and open-source models to facilitate interactive queries, automate network analysis, and generate visual network representations. ChatNet will enable users to pose natural language questions, receive detailed explanations, and visualize network structures, making complex network management tasks more accessible and efficient.
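A minimal RAG sketch (Python, assuming the OpenAI SDK; the corpus, model names, and prompt are placeholders) showing the retrieve-then-generate pattern ChatNet would build on:

    # Hypothetical sketch: retrieve the most relevant networking note by
    # embedding similarity, then answer with that note as context.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()
    corpus = ["TCP retransmissions spike under congestion ...",
              "A SYN flood is a denial-of-service attack that ..."]

    def embed(texts):
        out = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in out.data])

    doc_vecs = embed(corpus)

    def answer(question: str) -> str:
        q = embed([question])[0]
        best = corpus[int(np.argmax(doc_vecs @ q))]  # embeddings are unit-length
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": f"Context:\n{best}\n\nQuestion: {question}"}],
        )
        return resp.choices[0].message.content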
For example, a user might be able to ask questions like the following and receive natural language answers that are informed by logs and other information about network organization and health.
User Query: My network and connection are down. What’s happening?
Sample Response: Your network is experiencing a denial-of-service attack.
User Query: Is there any congestion in my network?
Sample Response: Yes. Between 4:30 and 6:30 pm, your network had too much load on the D7 subnet, based on the log data and your current network topology.
ChatNet aims to leverage these advanced AI technologies to create an adaptive chat platform that understands, develops, and visualizes computer networks. The project will deliver three levels of functionality, with the third being a stretch goal:
Basic Knowledge Exchange:
Advanced Problem Solving:
Visualization (stretch goal):
We anticipate that the best platform for this work would be a web-based application similar to existing online chat tools but specialized for computer networks and communication. The key features and components will include:
The Fall 2024 senior design team will prototype an app with the following deliverables:
Front-End Development:
Back-End Functionality:
Database Design:
Additional Features:
Explore voice interaction capabilities to enhance user experience (if feasible).
Create visualizations of network configurations and performance metrics within the chat interface.
Design scenarios, wireframes, and interaction flow to connect users with similar interests and goals in computer network and communication domains.
Through a collaboration between the Poole College of Management and the NC State University Libraries, we aim to create an innovative open educational resource. Jenn, a recipient of the Alt-Textbook Grant, has already developed an open source textbook for MIE 310 Introduction to Entrepreneurship. The next phase of this initiative focuses on creating open source mini-simulations.
Currently, there is a significant lack of freely accessible simulations that effectively boost student engagement and enrich learning outcomes within educational settings. Many existing simulations are typically bundled with expensive textbooks or necessitate additional purchases. An absence of interactive simulations in an Entrepreneurship course diminishes student engagement, limits practical skill development, and provides a more passive learning experience focused on theory rather than real-world application. This can reduce motivation and readiness for entrepreneurial challenges post-graduation.
Our primary goal is to develop an open source simulation platform that initially supports the MIE 310 course, but could be later made accessible to all faculty members at NC State and eventually across diverse educational institutions.
The envisioned software tool is a versatile open source mini-simulation tool designed to empower educators. Faculty will be able to develop interactive learning modules tailored to their teaching needs. This tool will be fully customizable by faculty who will be able to do the following:
We are flexible with the platform used to create the mini-simulations. Overall, it must be created with the intent of being an open-source tool that is free for instructors to log in to and create simulations with, and free for students to use to access their course content.
Impartial is a two-year-old 501(c)(3) that serves US criminal justice non-profits. We do that through programs such as justice video games, prison artwork, memberships, and more. Our vision is for Americans to evolve the US criminal justice system to align more closely with facts and justice. We want to teach law students and others the real issues and conditions of our existing criminal justice system to prepare them for leadership and involvement.
The US criminal justice system is fraught with inconsistencies and inequities that we have created and/or allowed. Criminal justice non-profits are agents of change that often do the best work of filling in the gaps and correcting errors. The next generation of criminal justice leaders needs to understand the criminal justice playing field.
Experiencing an actual prosecution gives interested parties firsthand knowledge of all the twists, turns, and opportunities. It affords people training to be attorneys a chance to make mistakes and correct them. It affords people who don’t understand our criminal justice system a chance to get involved in issues they didn’t know existed. When you begin to break down the criminal justice system into pieces and choices, you begin to understand where the problems and opportunities lie. For those already in the criminal justice field, it is a chance to play the game the way you want to, or to see what happens in the alternatives.
Our criminal justice system is a process. At each juncture there is an opportunity for the decision makers to follow the facts and law, correct their predecessors or misjudge the case and make decisions that may further compound others’ decisions. Sometimes the defendant has choices. Sometimes the prosecutor has choices. Sometimes the court has choices. When choices are not well known, evaluated and chosen, everyone suffers, but usually it is the defendant who suffers the most.
We have previously created Investigation and Grand Jury games that initiate a criminal case. They are being copyrighted, verified and tested.
The best way for players to learn is to experience how choices and decisions affect the outcomes of justice. This game covers the Arraignment and First Plea Deal. When a Grand Jury has handed down an indictment, someone is officially charged with a crime. As a player, you make choices and, in time, learn the far-reaching effects of those decisions.
Plead guilty or not guilty. Negotiate or accept a plea deal or move on to a trial. This game is focused on an actual case that was tried in a NC federal court.
Setting up the arraignment discussion, hearing, and outcome, along with a discussion about the plea deal, gives the player the choices that are most compelling and logical, creating fun and teachable moments.
How should the defendant plead? What factors should or does the defendant consider in order to make a decision about pleading? Why does the player make that choice? Will the defendant be allowed to stay out of prison until the trial or be remanded? What exculpatory evidence does the player have? What are the far-reaching consequences of pleading guilty? Can the defendant endure a trial financially, mentally, physically, etc.? Is there a chance that the charges could be dropped or reduced?
Choose the wrong path and you are locked up. Choose the right path and you may be locked up too.
We will pick up with the “Arraignment and Plea Deal” where “The Grand Jury” left off. (The first game was “Investigation”.) The Investigation game was created as a 2D game with Unity, Fungus, and GitHub. The second game, “The Grand Jury”, was created by Spring 2024 NCSU Capstone students. We are open to supplementing the technology with additional selections as needed or warranted.
The assets built in “The Investigation” and the “Grand Jury” include conference rooms, offices, airport scenes and all the characters (about 6). Additional assets can/will be added to ensure that the game is well supported.
Justice Well Played will be featured on our website when it is playable and/or when opportunities for others to join in on its development (testing, for example) are available.
The game is owned by Impartial (a non-profit) and will be used as a source of revenue for Impartial’s operation or donated to other criminal justice non-profits. Students who help to build the Arraignment/Plea Deal game will be able to use that game to demonstrate their participation and credentials but not to make income or allow anyone or any entity to do so. We will need a written agreement substantiating that understanding.
To the extent that any student has ideas that add value to the existing or potential game, we are very interested in your thoughts. This is an extremely collaborative endeavor. Thank you.
Katabasis is a non-profit organization that specializes in developing educational software for children ages 8-15. Our mission is to facilitate learning, inspire curiosity, and catalyze growth in every member of our community by building a digital learning ecosystem that adapts to the individual, fosters collaboration, and cultivates a mindset of growth and reflection.
For beginner students in computer science, learning to program can feel like an intimidating and unapproachable challenge. The translation of ideas to code requires a specific framework of thought which may not be immediately intuitive to students and can therefore make it difficult for beginners to jump into coding from scratch.
By instead providing existing code that contains bugs and asking students to debug these programs, we can bridge the gap between presenting a coding problem statement and finding its solution. This strategy will allow students to grasp programming concepts without necessarily having to write code from scratch, making coding a more approachable educational experience for beginners.
We are seeking a group of students to develop a puzzle game that reinforces debugging and programming concepts, leveraging an existing block-based programming API. This game will be targeted to upper middle school and early high school students for primary use in technology classrooms.
The game will position the player as a trainyard manager who is in charge of directing trains to various destinations. For each level, trains will each have an initial and target number of passengers, an initial and target amount of cargo, a target station, and possible additional required stops. As the trainyard manager, the player is in charge of programming the track junctions trains will travel across. This will primarily consist of programming behaviors for when tracks should switch direction and triggers for when trains should be released from stations. This will add puzzle complexity by requiring the player to manage collisions between trains, which will act as a failure state.
By defining the behavior of the tracks and other constructs/buildings through code, the challenge to the player is to avoid collisions between trains while simultaneously allowing them to reach all of their targets. Goals for the game will vary level by level, and can require stopping at certain stations in a certain order, just avoiding collisions, or making it to a destination in the minimum amount of time.
The game will consist of many puzzle-based levels. Each level will come with pre-existing code that acts as a partial solution, with some (or many) bugs present for the player to debug to complete the level. For each level, the team will be responsible for designing a puzzle and developing the partial solution code provided to the player. Each level should represent an interesting challenge, either from a puzzle or a conceptual computer science standpoint. Over the course of the game, we want players to learn about various beginner-level programming concepts, such as functions, boolean values, loops, if/else statements, and state. These concepts should be reflected in level design and should scaffold up to more advanced topics/puzzles.
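Purely as an illustration of the "partial solution with a bug" idea (in Python rather than the game's block-based language; all names are hypothetical), a level's junction code might look like this:

    # Hypothetical sketch: the junction should alternate trains left/right,
    # but the buggy version handed to the player never toggles its state.
    def make_junction():
        state = {"send_left": True}
        def on_train_arrives():
            direction = "left" if state["send_left"] else "right"
            # BUG for the player to find: the toggle below is missing.
            # state["send_left"] = not state["send_left"]
            return direction
        return on_train_arrives

    junction = make_junction()
    print([junction() for _ in range(4)])  # ['left', 'left', 'left', 'left']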
The game UI will be presented with a coding interface alongside the main game/puzzles themselves so that the player can construct/edit code while viewing the level they are trying to solve.
The touchstones for this project are Scratch, Human Resource Machine, Rush Hour, and Mini Metro.
In summary, the game should be developed around the following core feature set:
Students will be required to use Unity (version ≥2022.3.23f1) for this project to leverage the existing block-based programming API.
Katabasis is a non-profit organization that specializes in developing educational software for children ages 8-15. Our mission is to facilitate learning, inspire curiosity, and catalyze growth in every member of our community by building a digital learning ecosystem that adapts to the individual, fosters collaboration, and cultivates a mindset of growth and reflection.
As students make the transition from primary to secondary school, the choices they make have a progressively more significant impact on their lives. In high school, a student’s grades can determine whether they get into college, which can open the door to countless professions and high-paying jobs. As a young adult, investing early could mean amassing much greater wealth when the student is ready to retire. Unfortunately, not all students have access to comprehensive financial literacy education and are unaware of the options available to them. Students are left wondering: Is college the right choice for me? How do I ensure my own financial stability? How can I support myself and my loved ones as I pursue my personal goals?
Katabasis wants to offer a way for students to explore different life paths and make big decisions in a safe environment where they can see the results of their choices.
Katabasis seeks to develop a financial literacy swiping game for students in 8th-12th grade. Players will start their journey as teenagers and progress through various life phases, swiping left or right to make choices that increase their skill meters, grow their assets, collect achievements, and circumvent financial emergencies. Throughout gameplay, players will be given opportunities such as the pursuit of higher education, the creation and maintenance of an investment portfolio, engagement in clubs and hobbies, and decisions related to their family and household. Each choice a player makes will open or close doors down the road and will affect the different kinds of currency (monetary, social, skill-based) at their disposal. The objective of the game is to amass as much wealth as possible while achieving personal life goals.
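As a rough sketch of the state such a game tracks (in Python rather than the mandated GDScript, purely to make the currency types concrete; all names and numbers are hypothetical):

    # Hypothetical sketch: three currency types and one swipe choice.
    from dataclasses import dataclass, field

    @dataclass
    class PlayerState:
        money: float = 500.0
        social: int = 10
        skills: dict = field(default_factory=lambda: {"finance": 0})

    def apply_choice(state: PlayerState, swiped_right: bool) -> None:
        # Example card: "Take an after-school job?" (right = accept)
        if swiped_right:
            state.money += 200
            state.skills["finance"] += 1
        else:
            state.social += 2  # declining leaves more time with friends

    s = PlayerState()
    apply_choice(s, swiped_right=True)
    print(s)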
This project’s touchstone is the swiping game Reigns. For this game, we are looking for delivery of the following features:
This project must be built in the game engine Godot using GDScript. Additionally, the game must be built with WebGL support.
Additionally, this game must be able to run on relatively low-end systems. Be mindful to secure proper licensing for any art or other assets you use (even in a potentially commercial setting).
ShareFile is a leading provider of secure file sharing, storage, and collaboration solutions for businesses of all sizes. Founded in 2005 and acquired by Citrix Systems in 2011, ShareFile has grown to become a trusted name in the realm of enterprise data management. The platform is designed to streamline workflows, enhance productivity, and ensure the security of sensitive information, catering to a diverse range of industries including finance, healthcare, legal, and manufacturing.
In an era where data breaches and cyber threats are escalating in both frequency and sophistication, ShareFile places a premium on security. The company has built its reputation on providing robust, reliable, and secure solutions that enable businesses to operate with confidence in the digital landscape.
In the increasingly interconnected digital world, cybersecurity has become a paramount concern for organizations and individuals alike. As cyber threats evolve in complexity and frequency, traditional security measures often fall short in providing adequate protection against sophisticated attacks. To address this gap, the concept of a honeypot—a decoy system designed to attract, detect, and analyze malicious activity—has emerged as a valuable tool in the cybersecurity arsenal.
Building a honeypot on Amazon Web Services (AWS) presents an excellent opportunity for students to gain practical experience in cloud computing, cybersecurity, and system administration. AWS, with its robust infrastructure and extensive suite of services, offers an ideal platform for deploying scalable and flexible honeypots. By undertaking this project, students can develop a deeper understanding of modern security threats, enhance their technical skills, and contribute to the broader cybersecurity community by providing insights and data on emerging attack vectors.
The rise in cyberattacks targeting cloud environments necessitates the development of innovative security solutions to protect sensitive data and critical infrastructure. Traditional security mechanisms, such as firewalls and intrusion detection systems, often fail to provide early warning signs of new and sophisticated attack techniques. This gap in security leaves organizations vulnerable to breaches that can result in significant financial and reputational damage.
To address this challenge, the proposed student project aims to design, implement, and evaluate a honeypot system on AWS. The primary objectives of this project are:
By achieving these objectives, the project will not only bolster the students' technical competencies but also provide valuable insights into the evolving landscape of cyber threats, ultimately contributing to more robust and proactive security measures for cloud-based environments.
The student team will develop a cloud-based honeypot solution on AWS to detect, analyze, and respond to cyber threats targeting an enterprise’s cloud infrastructure. This solution will simulate a real production environment to attract attackers, capturing detailed information on their methods and techniques. The data collected will be analyzed to improve security measures and protect the actual cloud infrastructure.
The proposed solution will include the following components:
The proposed cloud-based honeypot solution uses AWS to create a realistic and scalable decoy environment, capturing valuable data on cyber threats. Custom honeypot software simulates vulnerable services and logs attacker activities. Analyzing this data provides actionable insights to enhance security measures, making the project beneficial for both educational purposes and real-world application.
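For a sense of scale, a low-interaction honeypot can start as something as small as the sketch below (Python; the decoy port is hypothetical, and a real deployment would ship events to AWS logging services rather than stdout):

    # Hypothetical sketch: listen on a decoy port and log every connection
    # attempt with its source address and first bytes of payload.
    import json, socket, time

    HOST, PORT = "0.0.0.0", 2222  # hypothetical decoy port

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.settimeout(5)
                try:
                    data = conn.recv(1024)
                except socket.timeout:
                    data = b""
                event = {"ts": time.time(), "src": addr[0], "src_port": addr[1],
                         "first_bytes": data[:64].decode("latin-1")}
                print(json.dumps(event))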
This solution will be deployed on AWS as a custom piece of honeypot software. The team should expect to study prior art, an important part of software engineering, to understand what is expected of a honeypot. Below are some of the technologies you’ll likely work with; while these are recommendations, the team is empowered to look at alternative options.
Dr. Srougi is an associate professor (NCSU Biotechnology Program/Dept. of Molecular Biomedical Sciences) whose research focuses on enhancing STEM laboratory skills training through the use of innovative pedagogical strategies. Most recently, she has worked with a team to develop an interactive, immersive, and accessible virtual simulation to aid in the development of student competencies in modern molecular biotechnology laboratory techniques.
Biopharmaceutical manufacturing requires specialized expertise to design and implement processes that are compliant with good manufacturing practice (GMP). Design and execution of these processes therefore requires that the current and future biopharmaceutical workforce understands the fundamentals of both molecular biology and biotechnology. While there is significant value in teaching lab techniques in a hands-on environment, the necessary lab infrastructure is not always available to students. Moreover, while online learning works well for conceptual knowledge, there are still challenges in how best to convey traditional ‘hands-on’ skills to a virtual workforce to support current and future biotechnology requirements. The need for highly skilled employees in these areas is only increasing. Therefore, to address current and future needs, we seek to develop virtual reality minigames of key laboratory and biotechnology skills geared towards workforce training for both students and professionals.
The project team has previously created an interactive, browser-based simulation of a key biotechnology laboratory skill set: sterile cell culture techniques. This learning tool is geared toward university students and professionals. In the proposed project, we intend to develop three virtual reality minigames to reinforce the fundamental skills required to perform the more advanced laboratory procedures represented in the simulation.
Minigame content: All minigames will feature the following core laboratory competencies, which benefit particularly from advanced interactivity and realism: 1) how to accurately use a set of single-channel pipettes, 2) how to accurately use a pipet aid, and 3) how to accurately load samples into an SDS-PAGE gel.
Length and Interactivity: Each minigame should aim to be a 10-15 minute experience. The games should give users the freedom to explore and engage with the technique while providing real-time feedback to correct any errors in user behavior (a sketch of one possible feedback pattern appears at the end of this project description). They should also be adaptable for future use with biohaptic feedback technology to provide a 'real-world' digital training experience.
Cohesion: The set of minigames should connect to the themes and design of the previously developed browser-based simulation; accordingly, the visual design of the minigames should closely match the real-world laboratory environment.
Students working on this project do not need prior knowledge of biotechnology or biotechnology laboratory skills; however, a basic interest in the biological sciences and/or biotechnology is preferred. This project will be a virtual reality extension of a browser-based interactive simulation written in 3JS within a GitHub repository. The minigames should be built in Unity, though there is flexibility in the choice of game engine. Games should be designed to run on relatively low-end computer systems. Proper licensing permissions are required if art and/or other assets are used in game development.
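As a concrete (and purely illustrative) example of the real-time feedback requirement above, one common pattern is a step-validation state machine: the game tracks the expected sequence of actions and surfaces corrective feedback the moment the player deviates. The sketch below uses engine-agnostic TypeScript so it runs anywhere; the same logic ports directly to C# in Unity. The step names and feedback strings are invented assumptions, not content from the existing simulation.

```typescript
// Engine-agnostic sketch of a step-validation state machine for a
// laboratory minigame. Step names and feedback text are illustrative.
type Step = { id: string; feedbackOnError: string };

// Assumed steps for a pipetting exercise (invented for illustration).
const PIPETTING_STEPS: Step[] = [
  { id: "attach-tip", feedbackOnError: "Attach a sterile tip before aspirating." },
  { id: "set-volume", feedbackOnError: "Set the target volume before drawing liquid." },
  { id: "aspirate",   feedbackOnError: "Depress the plunger to the first stop, then release slowly." },
  { id: "dispense",   feedbackOnError: "Dispense against the tube wall to avoid bubbles." },
  { id: "eject-tip",  feedbackOnError: "Eject the used tip before the next sample." },
];

class StepValidator {
  private next = 0;

  // Called by the game whenever the player performs an action.
  // Returns feedback to display immediately, or null if the action is correct.
  attempt(actionId: string): string | null {
    const expected = PIPETTING_STEPS[this.next];
    if (expected && actionId === expected.id) {
      this.next += 1;
      return null; // correct: advance silently (or play a success cue)
    }
    // Wrong or out-of-order action: surface corrective feedback right away.
    return expected ? expected.feedbackOnError : "The exercise is already complete.";
  }

  get complete(): boolean {
    return this.next >= PIPETTING_STEPS.length;
  }
}

// Example: a player tries to aspirate before attaching a tip.
const validator = new StepValidator();
console.log(validator.attempt("aspirate"));   // -> corrective feedback
console.log(validator.attempt("attach-tip")); // -> null (correct)
```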
Dr. Stallmann is a professor (NCSU-CSC) whose primary research interests include graph algorithms, graph drawing, and algorithm animation. His main contribution to graph algorithm animation has been to make the development of compelling animations accessible to students and researchers. See mfms.wordpress.ncsu.edu for more information about Dr. Stallmann.
Background.
Galant (Graph algorithm animation tool, pronounced “gǝ-LAHNT”) is a general-purpose tool for writing animations of graph algorithms. More than 50 algorithms have been implemented using Galant, both for classroom use and for research.
The primary advantage of Galant is the ease of developing new animations using a language that resembles algorithm pseudocode and includes simple function calls to create animation effects.
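To convey the flavor of that style (without claiming to reproduce Galant's actual API, which students should learn from the repository and its documentation), here is a hypothetical sketch of breadth-first search written as pseudocode-like code plus animation calls. Every animation function below is an invented stand-in, stubbed so the example is self-contained.

```typescript
// Hypothetical illustration of the animation style described above.
// These animation functions are invented stand-ins, NOT Galant's real
// API; they are stubbed here so the sketch runs on its own.
type NodeId = number;
const graph: Map<NodeId, NodeId[]> = new Map([[0, [1, 2]], [1, [3]], [2, [3]], [3, []]]);

const beginStep = () => console.log("-- step --");                    // stand-in: start an animation frame
const endStep = () => {};                                             // stand-in: commit the frame
const highlight = (v: NodeId) => console.log(`highlight node ${v}`);  // stand-in: visual effect
const setLabel = (v: NodeId, s: string) => console.log(`label ${v} = ${s}`);

// Breadth-first search written in the "pseudocode plus animation calls" style.
function bfs(start: NodeId): void {
  const dist = new Map<NodeId, number>([[start, 0]]);
  const queue: NodeId[] = [start];
  while (queue.length > 0) {
    const v = queue.shift()!;
    beginStep();
    highlight(v);
    setLabel(v, String(dist.get(v)));
    for (const w of graph.get(v) ?? []) {
      if (!dist.has(w)) {
        dist.set(w, dist.get(v)! + 1);
        queue.push(w);
      }
    }
    endStep();
  }
}

bfs(0);
```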
The most common workflow is
Problem statement.
There are currently two versions of Galant: (a) a sophisticated, complex Java version that requires git, Apache Ant, and runtime access to a Java compiler; and (b) a web-based version, galant-js, developed by a Spring 2023 Senior Design team and enhanced by teams in Fall 2023 and Spring 2024. The latter has been used in the classroom (Discrete Math), and several algorithms have been successfully implemented with it. However, it still has some major (and minor) inconveniences from a usability perspective.
Some enhancements are required to put the usability of galant-js on par with the original Java version, which has been used extensively in the classroom and in Dr. Stallmann's research. The JavaScript version already has clear advantages.
The biggest challenge is to establish a mapping between physical positions of nodes on the screen and logical positions of nodes in a file that describes a graph, for example, when scaling the window or viewport. For most graphs this is simply a matter of keeping track of a scale factor and doing the appropriate transformations during editing (and when an algorithm moves nodes). There are, however, special graphs whose nodes are points on an integer grid. The mapping must be maintained both during editing and algorithm execution.
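A minimal sketch of the mapping involved, assuming a uniform scale factor, a screen-space origin, and round-to-nearest snapping for integer-grid graphs (the class and method names are invented for illustration and do not reflect galant-js's actual code):

```typescript
// Sketch of mapping between logical node positions (as stored in a
// graph file) and physical screen positions. Names and the snapping
// rule are illustrative assumptions, not galant-js's actual code.
interface Point { x: number; y: number; }

class Viewport {
  constructor(
    private scale: number,        // pixels per logical unit
    private origin: Point,        // screen position of logical (0,0)
    private integerGrid: boolean, // true for grid graphs
  ) {}

  logicalToPhysical(p: Point): Point {
    return { x: this.origin.x + p.x * this.scale, y: this.origin.y + p.y * this.scale };
  }

  // Inverse mapping, used when the user drags a node or the window rescales.
  physicalToLogical(p: Point): Point {
    const q = { x: (p.x - this.origin.x) / this.scale, y: (p.y - this.origin.y) / this.scale };
    // For grid graphs, logical positions must stay on integer coordinates.
    return this.integerGrid ? { x: Math.round(q.x), y: Math.round(q.y) } : q;
  }

  // On window/viewport resize, adjust the scale so the same logical
  // extent stays visible; physical positions are recomputed on demand.
  rescale(factor: number): void {
    this.scale *= factor;
  }
}
```

The invariant to preserve is that physicalToLogical(logicalToPhysical(p)) returns p for every stored position, including after a rescale; grid graphs add the constraint that logical coordinates remain integers throughout both editing and algorithm execution.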
Other usability enhancements are related to the user interface: placement of menus, improved keyboard shortcuts, and more general accessibility features. There are also enhancements related to algorithm implementation and execution.
A detailed list of desired enhancements is in feature-requests.md at the root of the repository in the dev branch.
To run the current version, go to https://galant.csc.ncsu.edu
The Java version can be downloaded at https://github.com/mfms-ncsu/galant
Students are required to learn and use JavaScript effectively. The current JavaScript implementation uses React and Cytoscape for user interaction and graph drawing, respectively. An understanding of Cytoscape, in particular, will be required to address the challenge related to node positions on the screen.
The Tailwind CSS library is used for styling.