Click on a project to read its description.
There are significant savings opportunities for Blue Cross NC around Hierarchical Condition Category (HCC) coding, a payment model mandated by the Balanced Budget Act of 1997 (BBA) and implemented by the Centers for Medicare & Medicaid Services (CMS). HCC coding is used in risk adjustment modeling, which is based on two main factors: demographics and health status. BCBS of NC currently leverages the services of a vendor to assist in risk adjustment modeling and pays substantial fees for a process that has shortcomings. The higher operating costs, as well as potential transactional errors, are quite burdensome. Any application that can improve member/patient-level demographics and diagnoses to determine a Risk Adjustment Factor (RAF) score will enhance our revenue and reduce operational cost. Risk assessment data is based on diagnosis information pulled from claims and medical records, which are collected in physician offices, hospital inpatient visits, and outpatient settings.
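As an illustration of how a RAF score combines demographics and diagnoses, here is a minimal sketch; the coefficient values and code names are invented placeholders, not actual CMS-HCC model values:

```python
# Illustrative only: these coefficients are made up, not real CMS-HCC values.
DEMOGRAPHIC_FACTORS = {("F", "65-69"): 0.323, ("M", "70-74"): 0.394}
HCC_COEFFICIENTS = {"HCC18": 0.302, "HCC85": 0.323, "HCC111": 0.335}

def raf_score(sex, age_band, hcc_codes):
    """Sum a demographic base factor and the coefficient of each distinct HCC."""
    score = DEMOGRAPHIC_FACTORS.get((sex, age_band), 0.0)
    score += sum(HCC_COEFFICIENTS.get(code, 0.0) for code in set(hcc_codes))
    return round(score, 3)
```

The real model is considerably richer (interaction terms, new-enrollee factors), but the additive structure above is the core idea the application would need to support.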
Build a web application that demonstrates the use of FHIR data. Set up a connection to an open Electronic Health Record implementation (there are several industry sandboxes such as SMART HEALTH IT, EPIC, etc.) and demonstrate how access of specific data could enable insights to drive improved health outcomes for populations suffering from specific conditions or illnesses.
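A minimal sketch of working with FHIR data from an open sandbox might look like the following; the endpoint URL is an assumption, and the resource shapes follow the FHIR R4 search Bundle format:

```python
import json
import urllib.request

# Assumed open SMART Health IT sandbox base URL; verify before use.
FHIR_BASE = "https://r4.smarthealthit.org"

def condition_counts(bundle):
    """Tally Condition resources in a FHIR search Bundle by display text."""
    counts = {}
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Condition":
            continue
        coding = resource.get("code", {}).get("coding", [{}])[0]
        name = coding.get("display", "unknown")
        counts[name] = counts.get(name, 0) + 1
    return counts

# Example network call (not executed here):
# with urllib.request.urlopen(f"{FHIR_BASE}/Condition?_count=50") as resp:
#     print(condition_counts(json.load(resp)))
```

Aggregations like this, broken out by population, are one way "access of specific data" could surface condition-level insights.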
Familiarity with JSON, XML, and RDF is helpful; clinical and modeling (AI/machine learning, stochastic processes) knowledge is a plus but not required.
Software development organizations of all kinds want to understand how their software is being worked on, because that allows them to make better decisions. Information about development activity can identify problems in code or organization before they become schedule problems.
In Senior Design, we want to know if teams are collaborating well so we can help out that team. In industry, we want to know if certain files are becoming problematic to work on and to understand our development behaviors over time. For open source projects, we want to know who the best outside contributors are and whether certain areas of the code are contributed to more than others.
While there have been some tools to analyze repository data (such as "gitstats"), they usually operate on just one repository at a time. We want to build an easy-to-use tool that allows easy comparison of multiple repositories and offers additional statistics.
Per-user, over time, track and graph:
In general:
Per file extension:
The system should be able to generate both graphical and tabular reports, allowing as many types of comparisons both inside of repositories and outside as possible.
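As a sketch of the kind of per-user, over-time statistics involved, commit counts per author per month can be derived from `git log` output; the format string here is one possible choice, not a requirement:

```python
import subprocess
from collections import Counter, defaultdict

def commits_by_author(log_text):
    """Count commits per author email per month from lines of 'email|YYYY-MM'."""
    per_author_month = defaultdict(Counter)
    for line in log_text.splitlines():
        if "|" not in line:
            continue
        email, month = line.split("|", 1)
        per_author_month[email][month] += 1
    return per_author_month

def log_for_repo(path):
    """Run git log in a cloned repository (assumes git is on PATH)."""
    out = subprocess.run(
        ["git", "-C", path, "log", "--format=%ae|%ad", "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True)
    return out.stdout
```

Running this over several cloned repositories and merging the results gives the cross-repository comparison data the reports would be built on.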
GitHub usernames and passwords should be securely stored on the database server for each user. Supporting only https:// access to GitHub (versus SSH) is acceptable.
Go to town on the analytics and explore everything that is interesting, including source code scanning. There are really no limits once the basic parts of the project are implemented.
We intend to use this project for the SDC, and will therefore emphasize exceptionally clean and extensible code throughout the development process, sometimes at the expense of development velocity.
Duke Energy’s meteorology group uses meteorological and utility data to provide air dispersion and weather-related guidance to the enterprise. Meteorological data is used for predicting energy usage, wholesale energy trading, and environmental annual reporting. Inputs are gathered from multiple sources, both internal and external to Duke, for analyses and to provide model inputs. Once gathered, the data undergoes a thorough review to flag anomalies and ensure quality. The NCSU team will build an engine to assist the user in aggregating multiple meteorological data files, performing data validations and calculations, reformatting data, and exporting user-selected variables.
The MyMET Calculation Engine should be implemented as a web application. The application will capture data from multiple sources into a central database. Sample data files will be provided for input to MyMET. The application should allow for automated upload of the data. The basic flow is as follows:
Four basic types of data will be provided, each with multiple files. Descriptions are as follows:
The application should be web-based with a relational database.
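A minimal sketch of the anomaly-flagging step might look like the following; the variable names and plausibility ranges are invented placeholders, since the real limits would come from the sponsor's quality-review rules:

```python
# Hypothetical plausibility ranges; real limits come from Duke's QA rules.
VALID_RANGES = {"temp_f": (-40.0, 130.0), "wind_mph": (0.0, 200.0), "rh_pct": (0.0, 100.0)}

def flag_anomalies(rows):
    """Return (row_index, field, value) tuples for missing or out-of-range values."""
    flags = []
    for i, row in enumerate(rows):
        for field, (lo, hi) in VALID_RANGES.items():
            value = row.get(field)
            if value is None or not (lo <= value <= hi):
                flags.append((i, field, value))
    return flags
```

Flagged records would be surfaced to the user for review rather than silently dropped, matching the "thorough review" described above.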
Full documentation is expected. This includes:
The Fidelity Plan Sponsor Webstation (PSW) is a web-based application used by Fidelity clients to manage and administer their workplace benefits. Whereas employees of Fidelity clients view their individual benefits through NetBenefits, PSW is the site used by benefits administrators to manage the benefits that are offered to their employees. PSW supports the following workplace benefits: Defined Contribution, Defined Benefit (pension), equity compensation, Health Savings Accounts, health & insurance, and Student Debt Employer Contribution.
PSW was originally developed as a Java web application in 2002-2003. It uses an on-premise relational database for storing user profile data. Rather than leverage web services, our legacy applications connect directly to the database to run stored procedures to create and update a user profile. We view the current database (Sybase) and the architectural design (direct database connections) as legacy technology and we are seeking to modernize. As we move our capabilities to the cloud, we are looking to implement micro-services, which would involve creating a new web service that acts as the interface to the underlying data store. After creation of a web service and underlying data store, our existing applications would be modernized to interface with the profile data through the web service.
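To illustrate the idea of insulating applications behind a service interface rather than direct database connections, here is a minimal in-memory sketch (Python is used here for brevity; the actual service would use the sponsor's chosen stack and a cloud data store):

```python
class ProfileService:
    """Minimal in-memory stand-in for the profile micro-service's data layer.

    In the real system this interface would sit behind REST endpoints, so
    callers never touch the underlying store (today, Sybase stored procedures).
    """
    def __init__(self):
        self._profiles = {}

    def create(self, user_id, profile):
        if user_id in self._profiles:
            raise ValueError(f"profile already exists: {user_id}")
        self._profiles[user_id] = dict(profile)
        return self._profiles[user_id]

    def get(self, user_id):
        return self._profiles.get(user_id)

    def update(self, user_id, changes):
        if user_id not in self._profiles:
            raise KeyError(user_id)
        self._profiles[user_id].update(changes)
        return self._profiles[user_id]
```

The value of this shape is that swapping the backing store later changes nothing for callers, which is exactly the adaptability the modernization aims for.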
This project offers the opportunity to work on a cloud hosted solution to a real-world problem. As part of a full stack team working in an Agile fashion, you will gain valuable experience as well as exposure to multiple technologies, which will make you more attractive to potential employers after you graduate.
Your project must do the following.
While not required, you may choose to augment your project by doing any/all of the following.
The solution should use the following technologies:
This solution will help us move one of our core capabilities to the cloud. Having an insulated micro-service architecture will make the profile more adaptable in the future and will speed adoption by future applications.
A great amount of innovation happens outside of day-to-day projects, driven by people who are passionate about identifying problems and coming up with innovative solutions. However, many great ideas go untapped because they fall outside the scope of a project, lack direct funding, or simply have no team to pursue them.
Build a web application that solves the puzzle of how an idea moves from a simple brain spark to reality. Connect problem solvers who have the passion and the right skills, and who will make the time to help. Create dynamic matching and notification based on persona, domain, skill set, availability, etc. Create opportunities for associates to showcase and expand their skill sets without affecting their regular work velocity while increasing employee engagement. This application will provide a great opportunity not just to submit cool ideas but also to follow an idea through its lifecycle from inception to completion. A typical lifecycle for an idea would be:
For an idea to become a project, you will need:
Once an idea becomes a project, it can have these statuses associated with it: “Looking For Core Team”, “Actively Being Worked On”, “Looking For Volunteer Help”, “Successfully Finished”
Once an idea becomes a project, it will show up on the projects page and will have a status of “Looking for Core Team”. This is when people will be able to view your project and apply to be a part of the Core Team. If people are interested but don’t want to be part of the core team, they can “watch” the project and will be notified of any status changes.
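The lifecycle and statuses above could be modeled as a small state machine; the allowed transitions shown here are one plausible reading of the description, not a specification:

```python
# Allowed status transitions, inferred from the lifecycle described above.
TRANSITIONS = {
    "Looking For Core Team": {"Actively Being Worked On"},
    "Actively Being Worked On": {"Looking For Volunteer Help", "Successfully Finished"},
    "Looking For Volunteer Help": {"Actively Being Worked On"},
    "Successfully Finished": set(),
}

class Project:
    def __init__(self, title):
        self.title = title
        self.status = "Looking For Core Team"   # initial status per the description
        self.watchers = set()

    def set_status(self, new_status, notify):
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"cannot move from {self.status!r} to {new_status!r}")
        self.status = new_status
        for watcher in self.watchers:   # watchers are notified of any status change
            notify(watcher, self.title, new_status)
```

Keeping the transition table as data makes it easy to adjust as the real lifecycle is refined with the sponsor.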
The following personas can be used for multiple use cases.
Fidelity already has a proof-of-concept UI available, and the team can use it to start their UI design work.
Web Technologies: Use Angular, Java 8 or higher, Tomcat 8.5 or higher
Database: You can choose any database, but you should be able to explain the rationale for why the team picked that solution.
Our research team, consisting of researchers at NCSU and Intel Corporation, is developing decision support tools to help management understand the issues arising in capacity management during new product introductions. This project seeks to develop a prototype of a role-playing game where managers of different organizations involved in new product introductions can assess the impact of their own decisions on the performance of their organization, that of other organizations, and the firm as a whole.
The two principal organizational units involved in new product introductions in high tech firms, such as semiconductor manufacturers, are the Manufacturing (MFG) unit and a number of Product Engineering (ENG) units. Each Product Engineering unit is charged with developing new products for a different market segment, such as microprocessors, memory, mobile, etc. The Manufacturing unit receives demand forecasts from the Sales organization and is charged with producing devices to meet demand in a timely manner. The primary constraint on the Manufacturing unit is limited production capacity; no more than a specified number of devices of all sorts can be manufactured in a given month. The Product Engineering units have limited development resources in the form of computing capability (for circuit simulation) and number of skilled engineers to carry out design work. Each of these constraints can, to a first approximation, be expressed as a limited number of hours of each resource available in a given month.
The Product Engineering groups design new products based on requests from their Sales group. The first phase of this process takes place in design space, beginning with transistor layout and culminating in full product simulation. The second phase, post-silicon validation, is initiated by a request to Manufacturing to build a number of hardware prototypes. Once Manufacturing delivers these prototypes, the Engineering group can begin testing. This usually results in bug detection and design repair, followed by a second request to Manufacturing for prototypes of the improved design. Two cycles of prototype testing, bug detection and design repair are usually enough to initiate high-volume production of the new product. Especially complex products or those containing new technology may require more than two cycles.
The Manufacturing and Product Engineering groups are thus mutually dependent. Capacity allocated by Manufacturing to prototypes for the Product Engineering groups consumes capacity that could be used for revenue-generating products, reducing short-term revenue. On the other hand, if the development of new products is delayed by lack of access to capacity for prototype fabrication, new products will not complete development on time, leaving the firm without saleable products and vulnerable to competition.
We seek the development of an educational computer game where students assume the roles of MFG or ENG managers to make resource allocation decisions. The initial module of the game would focus on a single MFG unit and a single ENG unit. Resource allocation decisions will be made manually, giving the players of the game a feel for the unanticipated effects of seemingly obvious decisions.
The game will have one MFG player and can have multiple ENG players, with each player trying to maximize their own objective function. We shall assume for the sake of exposition one player of each type, and a given number of time periods T in which each player must make its resource allocation decisions.
Data common to all players:
T: number of time periods for which decisions must be made, t = 1,...,T
N: number of products to be considered for both production and development.
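A minimal sketch of the shared data and the MFG capacity constraint might look like this; the instance numbers are invented for illustration:

```python
# Hypothetical instance: T periods, N products (symbols as defined above).
T, N = 4, 2
CAPACITY = [100, 100, 120, 120]   # MFG production capacity in each period t

def feasible(production, prototypes):
    """Check the per-period MFG capacity constraint:
    total production plus prototype builds across all N products
    must not exceed CAPACITY[t]. Returns (ok, first_violating_period)."""
    for t in range(T):
        used = sum(production[t]) + sum(prototypes[t])
        if used > CAPACITY[t]:
            return False, t
    return True, None
```

A "what-if" screen could call a check like this before a player commits decisions, which is exactly the constraint checking and cost evaluation requested below.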
The Problem for the MFG Player:
The Problem for the ENG Player
Thoughts on Game Structure:
Two players (ENG and MFG) take turns, with each turn resulting in decisions for a given period. Each player would have access to the state of their world at the start of the current period. For MFG this would be current inventories of each of their products, current backlogs of each product, and demands for the products for the next few periods. For ENG, the state of the world would include their current resource levels and the degree of completion of products currently in development (i.e., which subtasks of which stage have been completed, or are partially completed).
Lots of variations are possible, and we would like to leave the maximum flexibility to enhance the game by adding more sophisticated procedures. Our NSF research is looking at auction procedures that use resource prices obtained from approximate solutions to optimization models by each player - a price-based coordination solution. So, we would ideally like to be able to use this game engine to simulate the solutions we get from our auction procedures and compare them to the solutions obtained by players making decisions manually.
In terms of screens, the most important requirement would be for a screen where each player can see the current state of their world, and evaluate the potential future impacts of different decisions before committing to them. This would require constraint checking and cost evaluation for both players. There should also be some means for the two players to communicate their offers and counteroffers.
A final requirement would be to include a corporate scorecard whose purpose is to assess the impact of the decisions on the firm overall, as opposed to for the individual players. The idea here is to examine situations where a given player’s objective leads it to make decisions that are good for their own short-term objectives but leave the other player in a very difficult position causing the firm to lose money in the longer term.
We expect the project team to educate us about what is possible and what alternative approaches may be taken from the gaming side, and we look forward to working together to come up with an interesting prototype that will allow us to explore this complex but very important problem.
For more than 40 years Microsoft has been a world leader in software solutions, driven by the goal of empowering every person and organization to achieve more. They are a world leader in open-source contributions and, despite being one of the most valuable companies in the world, have made philanthropy a cornerstone of their corporate culture.
While primarily known for their software products, Microsoft has delved more and more into hardware development over the years with the release of the Xbox game consoles, HoloLens, Surface Books and laptops, and Azure cloud platform. They are currently undertaking the development of the world’s only scalable quantum computing solution. This revolutionary technology will allow the computation of problems that would take a lifetime to solve on today's most advanced computers, allowing people to find answers to scientific questions previously thought unanswerable.
One of the necessary components of any quantum computing system is a hardware control plane to actually communicate with the quantum bits (qubits). Development of hardware requires the generation and analysis of significant amounts of data, and this control plane is no different. After the logic has been developed, the next step is the synthesis of logic gates in order to analyze timing results. After gate analysis is done, timing reports are generated, and validation occurs to make sure that the values are within specifications. Iterations are performed to improve the results, failures are assigned to users, fixes are put in place to resolve issues, and reports are re-generated. The goal of the project is to improve that flow by simplifying the interaction needed to perform these actions.
The storage technology that is used (ex. MySQL, NoSQL, filesystem) is to be decided by the development team. Flask must be used as the web framework.
The development team will be provided with multiple sample reports, as well as guidance on what data exists in the reports and what files to extract the data from.
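Since Flask is the mandated framework, a minimal sketch of accepting a parsed timing report and flagging out-of-spec paths might look like the following; the report shape is an assumption, since the real formats will come from the provided samples:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
reports = []   # in-memory store; the real storage technology is the team's choice

@app.route("/reports", methods=["POST"])
def upload_report():
    """Accept a parsed timing report and flag paths that miss spec.

    Assumed JSON shape: {"run": str, "paths": [{"name": str, "slack_ns": float}]}
    A negative slack is treated as a timing failure.
    """
    report = request.get_json()
    failures = [p["name"] for p in report.get("paths", []) if p["slack_ns"] < 0]
    reports.append({"run": report.get("run"), "failures": failures})
    return jsonify({"failures": failures}), 201

@app.route("/reports", methods=["GET"])
def list_reports():
    return jsonify(reports)
```

Endpoints like these would let the validation and failure-assignment steps happen in one place instead of by hand.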
Students will be required to sign over IPR to Sponsor when team is formed.
NetApp Storage Systems collect performance and configuration meta-data about system use. These data describe performance solutions and the workloads that drive those solutions. This performance meta-data is called AutoSupport and is the information which drives NetApp’s cloud based analytics solution called NetApp Active IQ. Active IQ system telemetry data is anonymized and aggregated at a customer level. These data are made available in a structured format and indicate multiple features and applications in use on the system. Structured data in this use case means data that are included in a relational database or data that are well formatted (for example, XML).
The primary goal of this project is to write a program that uses Active IQ data to find groups of applications that normally run together at a given site or locality. If rules can be identified with high confidence, then the goal is to identify a prioritized list of customers who are not using applications and NetApp solutions in a beneficial way. A sales strategy could then be developed to reach out to those customers and promote NetApp storage solutions.
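One simple starting point for finding applications that normally run together is pairwise co-occurrence counting across sites, a first step toward full association-rule mining; a sketch, assuming each site is represented as a set of application names:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(sites, min_support):
    """Count application pairs that appear together at sites; return the pairs
    whose support (fraction of all sites) meets min_support."""
    pair_counts = Counter()
    for apps in sites:
        for pair in combinations(sorted(set(apps)), 2):
            pair_counts[pair] += 1
    n = len(sites)
    return {pair: c / n for pair, c in pair_counts.items() if c / n >= min_support}
```

High-support pairs suggest candidate rules; customers running one application but not its usual companion would then be the prioritized outreach list.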
The high level steps and deliverables for this project are:
The goal of this project is to create an automatic speech recognition system that takes a continuous speech stream as input and transcribes the speech in real time. Automatic speech recognition (ASR) is a key technique in human-computer interaction. An ASR system converts an input speech signal into text, which is further analyzed by downstream natural language processing modules. This technique enables a natural way of communication between humans and computers. In this project, students will participate in the implementation of a Continuous Speech Recognition (CSR) system based on deep neural network models.
This project has the following two objectives with some subtasks as listed below:
To ensure the success of the project, experience with the Python programming language is required; knowledge of C/C++/Java is optional. Knowledge of and experience with some other techniques is preferred, including automatic speech recognition, deep neural networks, multi-threaded programming, and cloud programming.
There are some available datasets to train speech recognition models, including WSJ0, WSJ1, LibriSpeech, and SAS Radio data.
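As a small illustration of the front end of a streaming recognizer, a continuous sample stream is typically split into overlapping frames before feature extraction and decoding; a minimal sketch:

```python
def frame_stream(samples, frame_len, hop):
    """Split a continuous sample stream into overlapping frames, the usual
    front-end step before features are fed to an acoustic model.
    frame_len and hop are in samples; frames overlap when hop < frame_len."""
    frames = []
    start = 0
    while start + frame_len <= len(samples):
        frames.append(samples[start:start + frame_len])
        start += hop
    return frames
```

In a real-time system this loop would run incrementally over audio buffers arriving from the microphone rather than over a complete list.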
TradeTec is a leading provider of supply chain, inventory and equipment management software for forestry businesses throughout the United States and Canada. The company’s proprietary suite of on-premise and SaaS-based solutions track both hardwood and softwood timber from harvest to production, while monitoring associated costs throughout each stage of the production cycle. With over 30 years of experience in the Forest Products Software Business, the company prides itself on deep industry knowledge leading to continuous innovation and outstanding customer support. TradeTec was founded in 1985 and is headquartered in Winston-Salem, NC.
Offer Sheet is a tool that allows salespeople to communicate pricing and/or available log or lumber stock to their prospective customers. Our current implementation is not as user-friendly as we would like, involving emailing actual PDF documents. We want a simpler, lighter-weight, web-based solution which will offer easier access for all customers.
The salesperson running the application (TW Logs) at the sawmill. This person manages the current inventory and presents offers to the mill’s customers.
End user, a person responsible for purchasing wood. This user receives, via email, a link to the offer sheet from the salesperson. The offer sheet is an object identified by a URI; when the user clicks the URI, they are taken to the offer sheet containing the offer.
A lightweight server will serve up the available offer sheets, and a browser-based frontend will display the selected offer sheet to the customer. The application will be a single-page app developed following 12-factor app design principles, hosted adjacent to an existing customer implementation.
Backend: node.js / express app / MS SQL server for persistence
Frontend: vue / bootstrap with “flexible design” to support the browser based lightweight delivery.
The result of this project will streamline the delivery and communication of the offer sheets to their intended users and be accessible to the users with only a web browser from anywhere in the world.
Stretch goals:
Students will be required to sign over IP to Sponsor when team is formed.
Sample offer sheet generated as PDF:
Founded in Roanoke, VA in 1932, Advance Auto Parts (AAP) is a leading automotive aftermarket parts provider that serves both professional installer and do-it-yourself customers. Our family of companies operates more than 5,100 stores across the United States, Canada, Puerto Rico and the Virgin Islands under four brands: Advance Auto Parts, Carquest Auto Parts, WORLDPAC and Autopart International. Adapting the latest in data and computer science technology, Advance Auto Parts' Center for Insight and Analytics aims to bring machine learning and artificial intelligence to the automotive aftermarket parts market to turbo drive it forward into the digital era.
AAP knows a lot about cars – we have the best parts people in the industry! In addition to providing parts to consumers to address their vehicle maintenance and repair needs, we have numerous resources that can be used to help customers diagnose problems. This project aims to bring the ability to detect and address vehicle problems directly into customers' hands.
The crux of this project revolves around the design of the major system components and the interfaces between these components. A solid design is critical to the long-term utility of this application even as, for example, initially human-provided inputs are eventually replaced by automated or sensor-driven inputs. The desire is that such future updates would not require a refactoring of the system. A user interface will be required but will be considered secondary to the system design.
The core functionality of this app will be facilitating the input of updates to vehicle state, the utilization of those state changes to diagnose problems with the vehicle, the prescription of parts and labor needs to address those problems, and a user-facing list of those prescribed fixes.
We would propose that the bulk of the students’ time and attention be focused on the design of the key components and the interfaces between them. Sample data will be provided to underpin the Diagnosis and Prescription Engines as well as the Repair Job output but implementation beyond a well-designed API stub for each is not necessary.
The key UI elements will give users the ability to enter vehicle information and view the list of prescribed repair/maintenance needs. We encourage the students to think creatively about how to further integrate these prescriptions into other mobile user elements such as calendar and mapping apps.
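Following the suggestion of well-designed API stubs, the Diagnosis and Prescription Engines might be sketched as follows; the rule and fix tables are invented placeholders standing in for the sample data:

```python
class DiagnosisEngine:
    """Stub: maps vehicle state updates to suspected problems.
    The rules here are placeholders; real logic would be data- or sensor-driven,
    which is why callers only ever see the diagnose() interface."""
    RULES = {"battery_voltage_low": "weak battery",
             "brake_pad_mm_low": "worn brake pads"}

    def diagnose(self, state_updates):
        return [self.RULES[k] for k in state_updates if k in self.RULES]

class PrescriptionEngine:
    """Stub: maps each diagnosed problem to parts and labor needs."""
    FIXES = {"weak battery": {"parts": ["battery"], "labor_hours": 0.5},
             "worn brake pads": {"parts": ["brake pads"], "labor_hours": 1.5}}

    def prescribe(self, problems):
        return [dict(self.FIXES[p], problem=p) for p in problems]
```

Because the UI and the Repair Job output depend only on these interfaces, replacing human-entered state updates with sensor feeds later would not require refactoring the rest of the system, which is the stated design goal.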
Over the past decade, scientists and the public have become increasingly aware of declines in the health of managed honey bees, as well as populations of wild bees such as bumble bees. To help counteract these declines, many programs encourage individuals to install “pollinator gardens” on their properties. These habitats typically draw on lists of recommended, perennial plants known to provide pollen and nectar for bees. Although these habitats can be effective, in our local outreach and extension work, we have noted that garden design remains a barrier for many homeowners who are unsure how to create an aesthetically appealing garden using recommended pollinator plants.
To overcome this barrier, we would like to offer a web-based garden visualization app that lets homeowners map recommended plants into a garden space and view the virtual planting in three dimensions prior to installation. While similar apps do exist, they include few if any suitable plants and lack other desirable features.
Based on work with the Senior Design Center in Fall 2018, a foundation for the envisioned app now exists. It allows users
Working from this foundation, we would like to add the following functions:
In time, we would like to add additional features to the app/website. In case it is useful to have these on the table from the start, they include the following functions for users:
During weekly sponsor meetings, we will review new developments in the app and provide feedback and clarification related to the goal of making it possible for anyone to design an attractive, bee-friendly garden.
Students will be required to sign over IP to Sponsor when team is formed.
BlackBerry QNX technology includes QNX Neutrino OS and many middleware components. BlackBerry has decades of experience in powering mission-critical embedded systems in automotive and other industries. As the leader in safety-certified, secure, and reliable software for the automotive industry, BlackBerry currently provides OEMs around the world with state-of-the-art technology to protect hardware, software, applications and end-to-end systems from cyberattacks.
For Self-Driving Cars, functional safety is part of the overall safety of a system or piece of equipment and generally focuses on electronics and related software. As vehicles become increasingly connected and reliant on software, new threats emerge. Therefore, it is imperative that this software operates safely, even when things go wrong. A self-driving car is an extremely complex system with state-of-the-art technologies. Proving that the system does what it is designed to do is a great challenge. And it must do so in a wide range of situations and weather conditions. This requires a stable, secure and efficient operating system.
To ensure mission-critical reliability, BlackBerry QNX continually performs extensive automated testing of their software components, executing hundreds of thousands of tests daily. The current process of identifying trends and issues and generating charts and reports from these tests consumes a tremendous amount of human effort.
Automating this process and generating visual representations of trends and issues would help to identify software issues as early as possible and enable employees to focus on other tasks.
The goal of this project is to create a Central Test Management Web Application, incorporating work from previous Senior Design teams. Developing a Test Management dashboard will be beneficial as a cross-platform solution, giving developers and testers a central location from which they can access various web-based features, such as monitoring software quality trends, creating test lists based on project test requirements, and generating test reports that can be easily customized for each test area and user.
The 2019 Spring project is responsible for creating the core Test Management Web Application using code from the 2018 Fall Senior Design project as a base. The team is not responsible for creating the various tools being called by the Test Management Web Application.
The team will be expected to:
* The client-side application should be capable of running in any modern web browser without requiring additional extensions.
Some prior experience developing web (server-client) applications is strongly recommended for this project.
Members of the team would be expected to have or learn some of the following skills:
The client-side dashboard must be written in a language supported by modern web browsers; a modern JavaScript framework is strongly suggested. Any additions or updates to current code base must run on Linux and Windows OS(es).
BlackBerry QNX mentors will provide support at the beginning of the project for a project overview, initial setup, and assistance in the software tool selection.
The students will also be provided with:
Mentors will be available to provide support, planning sessions, Q & A, technical discussions and general guidance throughout the project. Mentors will also be available for meetings on campus as necessary.
BlackBerry is an enterprise software and services company focused on securing and managing IoT endpoints. The company does this with BlackBerry Secure, an end-to-end Enterprise of Things platform, comprised of its enterprise communication and collaboration software and safety-certified embedded solutions.
Based in Waterloo, Ontario, BlackBerry was founded in 1984 and operates in North America, Europe, Asia, Australia, the Middle East, Latin America, and Africa. For more information, visit BlackBerry.com.
Customers rely on QNX to help build products that enhance their brand characteristics – innovative, high-quality, dependable. Global leaders like Cisco, Delphi, General Electric, Siemens, and Thales have discovered QNX Software Systems gives them the only software platform upon which to build reliable, scalable, and high-performance applications for markets such as telecommunications, automotive, medical instrumentation, automation, security, and more.
QNX software is now embedded in 120 million cars that are on the road today. Automotive OEMs and tier ones use BlackBerry QNX technology in the advanced driver assistance systems, digital instrument clusters, connectivity modules, handsfree systems, and infotainment systems that appear in car brands, including Audi, BMW, Ford, GM, Honda, Hyundai, Jaguar Land Rover, KIA, Maserati, Mercedes-Benz, Porsche, Toyota, and Volkswagen.
Students will be required to sign Non-Disclosure Agreements and sign over IP to BlackBerry when team is formed.
What is Bugle: Bugle is an application and website that enables volunteer event organizers to easily manage volunteer events. Bugle provides a robust suite of project management tools to simplify the unique challenge of organizing a volunteer event. Bugle also helps volunteers find service opportunities within their community. Volunteers can search for events by category, location, and time. Bugle’s services are free for organizations hosting volunteer events as well as volunteers looking for them. Bugle is a non-profit organization committed to making volunteering easier.
Users:
Concept: Interactive tracker is a feature of the Bugle app. During a volunteer event, the interactive tracker allows the event organizer and event team leaders to track the status of an event by annotating each task that has been completed, tasks that are in progress, and tasks that have yet to be started. The interactive tracker enables the event organizer to build an event timeline, input the details associated with completing a task, and assign tasks to event team leaders. This feature will be particularly helpful for larger events like Relay for Life, where an event coordinator is managing hundreds of volunteers and a wide range of resources.
Purpose: The interactive tracker assists in organizing event sequencing, while providing oversight to event leadership.
Functionality:
Technology: Mobile-app developed for Android and iOS written in React Native.
Students will be required to sign over IPR to Sponsor when team is formed.
Banks are actively looking for opportunities to leverage emerging technologies in the AI/ML space. The ability to understand how various factors influence the evolution of specific markets and to model such outcomes in a predictive fashion is particularly intriguing, both academically and practically. While this problem can be generalized across various asset classes (e.g., forex, commodities, energy), students should identify a particular area of focus.
Please review the following high-level specifications and requirements. These will be refined in greater detail during scoping with the team.
Students will use their course knowledge, creativity, and technical expertise to delve deep into the exciting world of AI/ML. DBGT will provide thought leadership, direction, guidelines, technical expertise (as needed) and support in refining the project scope. The DBGT team is here to aid and guide the student workflow, processes, and design ideas. We support and encourage the students’ input on this project.
Senior Design students in the College of Engineering Department of Computer Science will have a unique opportunity to partner together over the course of the semester to explore the exciting and developing field of AI/ML with direct application to a real business problem. Additionally, students will have access to industry professionals to assist in the software design, agile practices, and the overall code development and testing. Students will be allowed to share the final product as part of their own portfolio while job seeking.
Siemens Healthineers develops innovations that support better patient outcomes with greater efficiencies, giving providers the confidence they need to meet the clinical, operational and financial challenges of a changing healthcare landscape. As a global leader in medical imaging, laboratory diagnostics, and healthcare information technology, we have a keen understanding of the entire patient care continuum—from prevention and early detection to diagnosis and treatment.
At Siemens Healthineers, our purpose is to enable healthcare providers to increase value by empowering them on their journey towards expanding precision medicine, transforming care delivery, and improving patient experience, all enabled by digitalizing healthcare. An estimated 5 million patients globally benefit every day from our innovative technologies and services in the areas of diagnostic and therapeutic imaging, laboratory diagnostics and molecular medicine, as well as digital health and enterprise services. We are a leading medical technology company with over 170 years of experience and 18,000 patents globally. Through the dedication of more than 48,000 colleagues in over 70 countries, we will continue to innovate and shape the future of healthcare.
Preclarification is a remote “troubleshooting” service Siemens offers its customers to diagnose and resolve reported equipment issues without the need for on-site support. When an issue is reported, Siemens Healthineers creates a ticket known as a Notification. The Remote Services Center utilizes remote capabilities (established through a broadband internet connection to either a customer-owned or Siemens-provided secure endpoint) to log directly into the customer’s system and investigate the issue. During Preclarification we will either resolve the Notification or develop a detailed action plan for our Customer Service Engineers to follow once they arrive onsite. This action plan will provide direct and concise technical repair recommendations (including potential parts replacement requirements) to reduce on-site repair time.
Root Cause is determined and documented at the conclusion of each Notification by either the Remote Services Center or the Customer Service Engineer. This is critical since the issue reported and the actual root cause of the problem can differ greatly. Root Cause is captured in a database in the following three fields: Cause Code Group, Cause Code, and Cause Code Text. Collectively, these fields provide visibility to the product module affected and the work performed to correct it.
Accurately diagnosing customer reported issues has become increasingly challenging due to a diverse product portfolio and overall system complexity. Our objective is to improve Preclarification performance, drive Remote resolution rates, reduce onsite repair time, and most importantly increase customer workforce productivity and efficiency. In doing so, we will optimize instrument performance and reduce system downtime, which will greatly improve patient care. In order to achieve this we are seeking support in the area of predictive analytics. Specifically, our goal is to trend customer reported issues and predetermine root cause using documented data compiled from our customers, our Remote Services Center, and our Customer Service Engineers. With over 1,000 products and 500,000 potential areas of root cause, we are seeking your support in transforming how we predict and respond to our customers’ needs.
This project will be broken out into three distinct phases:
The first step is to join together the notification data with the root cause data in a database. This may require some preprocessing (e.g. tokenization, normalization) to prepare the data for exploration and experimentation in phase II. The choice of which tools and methodology you employ is up to you.
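As a minimal sketch of this first phase, the join and preprocessing could look like the following. The field names (`id`, `description`, `cause_code_group`, `cause_code`) are illustrative placeholders, not the actual schema of the Siemens datasets:

```python
import re

def normalize(text):
    """Lowercase, strip punctuation, and tokenize a free-text issue description."""
    return re.findall(r"[a-z0-9]+", text.lower())

def join_notifications(notifications, root_causes):
    """Join notification records with root-cause records on notification ID.

    Both inputs are lists of dicts; notifications with no documented
    root cause are dropped from the joined dataset.
    """
    causes_by_id = {rc["id"]: rc for rc in root_causes}
    joined = []
    for n in notifications:
        rc = causes_by_id.get(n["id"])
        if rc is None:
            continue  # no root cause recorded for this notification
        joined.append({
            "id": n["id"],
            "tokens": normalize(n["description"]),
            "cause_code_group": rc["cause_code_group"],
            "cause_code": rc["cause_code"],
        })
    return joined
```

The same join could of course be done in SQL or with pandas; the point is only that each training example pairs tokenized notification text with its labeled root cause.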
After the database of notification data and root cause data is created, the next step will be to use machine learning to predict the root cause of a given notification. We are flexible with how you solve this problem as long as all methods are clearly documented and explained.
We realize that as computer science undergraduates you might not have had a lot of exposure to natural language processing and machine learning. But if you are a student with some interest or experience in these areas this could be an excellent opportunity to combine your skills and talent to accomplish something very impactful. Because of the scope for this phase, Siemens is committed to pointing you in the right direction with your researching and working closely with you to ensure this project is solvable and successful.
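To make the phase II task concrete, here is a from-scratch baseline: a tiny multinomial naive Bayes classifier over notification tokens. This is only an illustration of the prediction step; in practice the team might instead use an off-the-shelf library such as scikit-learn:

```python
import math
from collections import Counter, defaultdict

class NaiveRootCauseClassifier:
    """Multinomial naive Bayes with add-one smoothing over token lists."""

    def fit(self, docs, labels):
        self.label_counts = Counter(labels)
        self.token_counts = defaultdict(Counter)
        self.vocab = set()
        for tokens, label in zip(docs, labels):
            self.token_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, tokens):
        def log_prob(label):
            lp = math.log(self.label_counts[label])
            total = sum(self.token_counts[label].values()) + len(self.vocab)
            for t in tokens:
                # add-one (Laplace) smoothing for unseen tokens
                lp += math.log((self.token_counts[label][t] + 1) / total)
            return lp
        return max(self.label_counts, key=log_prob)
```

A baseline like this also gives an honest yardstick: any more sophisticated model the team builds should beat it on held-out notifications.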
Once you have created a satisfactory machine learning model, the final step is to build an intuitive user interface. This interface will let non-technical users classify the root cause of new notification data using the model you developed in phase II.
NOTE: Students will be required to sign Non-Disclosure Agreements and sign over IP to Siemens Healthineers when team is formed.
This dataset will contain text descriptions of the issues for each notification. The additional fields in the dataset will be used to classify and group similar machines for machine learning.
Each notification will have a cause code (with a corresponding text value) that represents the root cause. In this example, the root cause is HW67100 A051.
With highly skilled cybersecurity experts in high demand, we are struggling to hire people who are experienced and knowledgeable in finding security issues. Visualizing data to identify security-related anomalies could help increase efficiency and fun in the world of threat detection.
Using a game interface along with algorithmic artificial intelligence, developers would create an engine capable of generating an abstracted interface (world) that allows analysts to identify cyber threats.
Leveraging artificial intelligence processing techniques, assets and security events should be visualized in a logical, engaging way. A user should be able to see an asset and events relevant to the asset. Assets should represent a computer (physical or virtual), and events should be pulled from data sources that will be provided.
BB&T, through a partnership with Securonix, will provide a cloud data lake and can simulate various threats.
Students may be asked to sign over IP to sponsor when team is formed.
In this project, the team will have a chance to work on a leading-edge research area of cryptography: quantum-resistant cryptography. A quantum computer (QC) could break essentially all of the public key cryptography standards in use today: ECC, RSA, DSA and DH. Thus, if practical QCs became a reality, they would pose a threat to PKI-based signatures. For example, someone with a QC at some time in the future could impersonate a bank’s website by guessing the bank’s private key that corresponds to its public certificate. For those reasons, the National Institute of Standards and Technology (NIST) has started a PQ Crypto project in order to pick the quantum-secure algorithms for the future.
Quantum-secure algorithms do not come without a cost. They are more CPU intensive and introduce more communication overhead. Thus, today’s protocols cannot use them off the shelf. Cisco Systems has been focusing on quantum-secure signatures and their impact on common communication protocols like TLS, IKEv2 and SSH. Other big vendors like Google and Cloudflare have been looking into key exchange algorithms in TLS. Microsoft has also focused on key exchange and VPNs. Quantum-resistant crypto is a topic that has gotten great attention in academia, standards bodies (IETF, ETSI), government organizations and the industry.
In this senior design project, we would like the team to build a test application that enables us to benchmark the performance of TLS connections. Metrics we are mostly interested in are:
We would like to be able to run these benchmarks regardless of the signature algorithm that is used behind the scenes in the TLS handshake. For example, we would run it in plain TLS 1.3 with traditional signature algorithms. We would then run it using Picnic signatures, and finally using hybrid P-256+Picnic. That way, as more postquantum signature algorithms are implemented, we would be able to compare TLS performance in the future.
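A rough harness for this kind of run might look like the sketch below. It uses Python's stock `ssl` module, so it only exercises classical algorithms; the Picnic and hybrid runs would instead link against the oqs_openssl build, and the host, port, and sample count are placeholders:

```python
import socket
import ssl
import statistics
import time

def time_handshake(host, port=443):
    """Return seconds spent on one TCP connect plus TLS handshake."""
    ctx = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host):
            pass  # the handshake completes inside wrap_socket
    return time.perf_counter() - start

def summarize(samples):
    """Mean and worst-case latency over a list of handshake timings."""
    return {"mean": statistics.mean(samples), "max": max(samples)}
```

Repeating `time_handshake` many times per signature configuration and comparing the `summarize` output would give one of the handshake-latency metrics; connection throughput and bytes on the wire would need additional instrumentation.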
The open-source libraries we will use are liboqs and oqs_openssl:
As a stretch goal, the team would integrate Falcon, one of the best PQ signature candidates, into oqs_openssl. There is an open-source implementation of Falcon available at https://falcon-sign.info.
Students must have
Dell EMC follows a centralized lab approach for many of our Product Engineering teams. We have large shared labs in strategic locations across the globe where most of the equipment necessary to design and develop new products is housed. Since our Engineering teams are not co-located with their equipment, we have Operations teams in place at each of our strategic lab locations to provide assistance with asset management, maintenance, upgrades, and support. These Operations teams are part of the larger IEO (Infrastructure, Engineering, and Operations) organization within Dell EMC. The IEO team utilizes an application called ESM (Engineering Service Management) to manage assets, requests, and incidents for all of the labs we support. This application is built on top of the ServiceNow platform.
The focus of this project is to enhance and expand the browser-based mobile support we currently have in ESM for both our End User Portal as well as the Process User View where our Operations teams manage their day-to-day activities. This will require direct ServiceNow development utilizing JavaScript, AngularJS, jQuery, JSON, and many other technologies where necessary.
The project consists of the following components:
Before the project begins, the team will be granted access to one of our ESM lower-level environments for full usage throughout the semester. This environment will be a clone of our Production environment.
Part of Dell Technologies, Dell EMC is the world's leading developer and provider of information infrastructure technology and solutions. We help organizations of every size around the world keep their most essential digital information protected, secure, and continuously available.
We help enterprises of all sizes manage their growing volumes of information—from creation to disposal—according to its changing value to the business through big data analysis tools, information lifecycle management (ILM) strategies, and data protection solutions. We combine our best-of-breed platforms, software, and services into high-value, low-risk information infrastructure solutions that help organizations maximize the value of their information assets, improve service levels, lower costs, react quickly to change, achieve compliance with regulations, protect information from loss and unauthorized access, and manage, analyze, and automate more of their overall infrastructure. These solutions integrate networked storage technologies, storage systems, analytics engines, software, and services.
Dell EMC's mission is to help organizations of all sizes get the most value from their information and their relationships with our company.
With the development of distributed ledger technology now ten years in the making, enterprises are slowly exploring the possibilities of blockchain. While most of the blockchain work is happening on innovative solutions focused on public and/or permissionless blockchains, enterprises will predominantly use private and/or permissioned blockchains.
Public vs Private Blockchain? The most important difference between a public and a private blockchain is that within a public blockchain, the actors involved in the network are not known, while in a private blockchain they are.
Private Blockchains are generally used by organizations that like to keep a shared ledger for settlement of transactions. They are owned and operated by a group of organizations, and transactions are visible only to members of the network. With the technology advancing and more organizations seeing the benefits of a shared ledger among industry partners, more enterprise blockchain solutions are being developed.
Hyperledger - Blockchain Technologies for Business. Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration, hosted by The Linux Foundation, including leaders in finance, banking, IoT, supply chain, manufacturing and technology. https://www.hyperledger.org/
For this project, the team will:
Key focus areas for a BirlaSoft Hyperledger Business Blockchain include:
Students will be asked to sign over IP to sponsor when team is formed.
Knowledge is power. News is not.
Sally is a powerful partner at a nationally recognized law firm. She heads a practice group prominent for its work on high-profile issues in food, product and chemical safety. Her day begins very early with a workout before she heads into the office; she stops at 6 pm for dinner and family, and tries to get in a half-hour of reading before truly ending the day.
She does not have much time for general news channels, where less than 0.1% of articles are relevant. One drawback of general news reporting is its time-backward perspective: it reports on the start of a court case when the attorneys have already been selected.
Time-forward knowledge is relevant. Trends, innovations, and risks in her and her clients’ spaces are very relevant. Her reading focuses on weekly professional journals and news summaries where 50% of the articles are seemingly relevant (think about equivalents to ACM’s XRDS).
To efficiently consume professional and business news, she has her own approach. She discards obviously irrelevant sections and scans the remaining articles’ first paragraphs to select the ones she will read in full. Still, she finds herself skipping 80% of the articles.
To respect her valuable time, we introduce PowerKnowledge. PowerKnowledge takes a large feed of multiple sources of business and industry news and adaptively selects the three top-ranked articles that Sally should read that day. It uses knowledge of her current interests, the diversity of articles presented to her within the past week, and unobtrusively captured feedback. Her current interests could be based on industry, product category, manufacturer or specific matter.
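The selection logic just described could be sketched minimally as follows. The topic tags, interest weights, and the redundancy penalty of 0.5 are all illustrative assumptions, not the product's actual model:

```python
def rank_articles(articles, interests, recent_topics, k=3):
    """Pick the top-k articles for today's digest.

    `articles` is a list of (article_id, set_of_topic_tags); `interests`
    maps a topic to a learned interest weight; `recent_topics` counts how
    often each topic appeared in the past week's digests.
    """
    def score(tags):
        interest = sum(interests.get(t, 0.0) for t in tags)
        # penalize topics the reader has already seen a lot of this week
        redundancy = sum(recent_topics.get(t, 0) for t in tags)
        return interest - 0.5 * redundancy
    ranked = sorted(articles, key=lambda a: score(a[1]), reverse=True)
    return [a[0] for a in ranked[:k]]
```

The unobtrusive feedback loop would then update `interests` (e.g., raising the weight of topics whose articles Sally reads in full and lowering those she skips).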
When PowerKnowledge sells well, in the follow-up version we will consider incorporating knowledge of behaviors from similar people. Be mindful that this problem is different from the online shopping recommendation engine. It is the dissimilarity that gives Sally’s law firm the competitive edge.
This product is currently troubled by the cold-start problem, which occurs when a machine learning product has zero users and hence insufficient labeled training/testing data.
This project is to overcome the cold-start problem: the initial labeled training/testing data will be collected from roughly 20 to 50 test users in a more obtrusive manner over a quarter-year.
Our jump-off point is an existing simple web-application. This web-app prototyped the product concept. The required modifications are:
The data source is a LexisNexis business news feed.
The technology is Python, JavaScript and MySQL. The web-app framework is Django.
An asthmatic high school student, diagnosed with asthma in early childhood by her pediatrician, uses a control inhaler to help her manage her symptoms and a rescue inhaler when necessary. She takes the bus to school every morning and engages in extracurricular activities, including outdoor sports, each afternoon. Her symptoms may correlate with increased physical activity but the timing and severity of these symptoms are difficult for her and her family to track.
The student wears a health monitoring device that detects heart rate (R-R peaks) and environmental exposure to O3 and volatile organic compounds (VOC). The student’s mobile health application should provide feedback to the student to influence behavioral changes if there is a potential health risk.
Smart and self-powered sensing nodes can have disruptive impact in healthcare and the Internet of Things (IoT). Autonomous self-powering leads to “always on” operation that can enable vigilant and long term monitoring of multiple health and environmental parameters. When packaged in wearable, comfortable, and hassle-free platforms, these systems increase adoption by users and can be worn to gather information over long periods of time and reveal possible correlation or even causality between different sensor streams. This information can be powerful in chronic disease management such as heart disease, asthma, and diabetes.
Similarly, in IoT applications, always-on, battery free operation of smart sensing nodes can lead to low maintenance structural monitoring of buildings, cities, and infrastructure along with large scale smart agricultural or industrial monitoring applications. The NSF Center on Advanced Self-Powered Systems of Integrated Sensors and Technologies (ASSIST) is building disruptive self-powered smart sensing nodes with state-of-the-art energy harvesting technologies, high-power/high-energy density supercapacitors, ultra low-power electronics, and low power health and environmental sensors all integrated into comfortable wearable platforms that work together to achieve “always on” capability.
The ASSIST use cases focus on measuring heart rate (HR)/heart rate variability (HRV) and environmental air quality to alert users of potential health risks. An app-based user interface should support the ASSIST use cases to provide user feedback and influence behavioral changes based on the data.
ASSIST wearables use Bluetooth Low Energy (BLE) to serialize and export data to the app. These use cases require the development of mobile health apps that can synthesize data from multiple sensors, process the sensor data based on known correlations from offline analysis, and provide an engaging user interface. The app should connect via BLE and allow users to dynamically visualize HR, HRV, and environmental exposure (e.g., gas concentrations). In addition to data aggregation, the app should raise on-device alerts for high HR, HRV, and exposure.
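The metric and alerting logic could be sketched as below. RMSSD is one common time-domain HRV metric computed from the R-R peaks the device reports; the threshold values would come from ASSIST's offline analysis and are placeholders here:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def check_alerts(hr_bpm, rmssd_ms, o3_ppb, thresholds):
    """Return the names of any readings outside their alert bounds.

    `thresholds` maps a metric name to (low, high) bounds.
    """
    readings = {"hr": hr_bpm, "rmssd": rmssd_ms, "o3": o3_ppb}
    alerts = []
    for name, value in readings.items():
        low, high = thresholds[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts
```

On the device, this check would run over a sliding window of recent BLE packets so a transient spike does not immediately trigger an alert.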
Some specifications for the application include:
The development team will use a cloud service to host the data. This decision is affected by expected access permissions, security/encryption methods, and regulatory requirements. As an option, the team will also research and outline the requirements for being HIPAA compliant in both privacy and security. The data storage implementation to be managed by the app development team should take into account data security while also making it accessible to ASSIST members and corporate affiliates with the proper access permissions.
When a business need is identified, Merck sends a request for information (RFI) to potential suppliers who may be able to provide a product or service to fulfill the business need. The potential suppliers send responses, which are reviewed by Merck. Once a supplier is selected, a contract is negotiated and signed by Merck procurement. During this time, the supplier is subjected to multiple risk assessments to continuously gauge the business relationship between the supplier and Merck. For example, a supplier could be the source of a security or privacy breach, or a supplier may otherwise not fulfill contractual expectations required for Merck’s success. Merck needs a system to capture information used to assess risk, including data such as:
The team will design and build a software system to manage a master inventory of suppliers. The supplier management system will become the ‘single source of truth’ about supplier relationships and will capture information necessary to provide governance and manage risks. For example, the system will store contracts, contact information, product or service descriptions, financial expenses, etc. Without a system to manage supplier information, governance and risk management is not holistic.
The system should have a database of suppliers to capture supplier information based on the identified requirements (examples are provided above). The system also needs to have a user-friendly front-end that provides a dashboard view of risks and suppliers with the ability for a Merck employee to generate reports. Users of the system can include both Merck internal users that manage supplier risks, as well as external suppliers who will use the system to provide information to Merck.
Wake Tech Community College, Life Sciences Department, Dr. Candice Roberts
Wake Technical Community College is the largest community college in North Carolina, with annual enrollments exceeding 70,000 students. The pre-nursing program in the Life Sciences Department runs a two-course series on Anatomy and Physiology, where this project will be used, with enrollments exceeding 800 annually. Additionally, this project is expected to assist over 1,000 biology students when fully implemented.
Biology and pre-nursing students need to understand how the body carries out and controls processes. Proteins have a diverse set of jobs inside cells of the body, including enzymatic, signaling, transport and structural roles. Each specific protein in the body has a particular function, and that function depends on its 3D conformation. It makes sense, then, that to alter activities within the cell or body, proteins change shape to change function. As a beginning biology or pre-nursing student, this is a difficult process to imagine from a 2D image in a textbook, and we wish to create a tool that helps visualize protein dynamics. One important example of this is hemoglobin. Hemoglobin is a large protein found inside red blood cells; its primary functions are to carry oxygen to the cells of the body and carbon dioxide away from them. Structures inside hemoglobin bind oxygen dynamically at the lungs and then release it at metabolically active tissues.
Last semester, students in the senior design course created an AR app that allowed students to view the structure of hemoglobin, under various conditions. This semester we want to expand this technology from a tool that permits visualization of a single protein to a tool that’s usable in the classroom and permits instructors to guide their students’ exploration of the concepts of protein dynamics. The main functionality of the software for visualizing protein structure under different conditions exists, but making it usable for instructors and students as a learning tool requires additional design and development to enable biology instructors to populate the backend database with protein structures, tailor visualizations of those structures to the learning goals, and add additional instructional content.
Building on the framework developed during the prior senior design project, this revision must improve upon existing functionality as well as provide new functionality.
Functionality improvements:
New functionality:
Senior design students will also work with Wake Tech Community College Biology instructors to conduct usability testing at two or three sessions during the semester, which will require transportation to Wake Tech’s Perry Health Science Campus in Raleigh.
Students will be required to grant royalty-free use of IP to Sponsor when the team is formed.
We are looking for a team that wants to experience the whole life cycle of software development all while working to solve real-world business needs - AI, cutting edge front end, robust backend deployed in containers using Docker and Kubernetes.
The voice services that Bandwidth offers to its customers are world class and very useful for communication purposes. However, wouldn’t it be great to have a further understanding of what happened on a call without listening to each and every call a company makes? Think of a call center that makes thousands of calls a day. The intent of this project would be to analyze the content and sentiment of a voice call and determine its makeup.
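As a very rough illustration of the sentiment half of this analysis, a lexicon-based scorer over a call transcript might look like the following. A real pipeline would first run speech-to-text on the recording and would likely use a trained model rather than this hand-picked word list:

```python
# Illustrative word lists; a production lexicon or trained model would
# be far larger and domain-tuned for call-center language.
POSITIVE = {"great", "thanks", "helpful", "resolved", "happy"}
NEGATIVE = {"angry", "cancel", "terrible", "frustrated", "problem"}

def sentiment(transcript):
    """Score a transcript in [-1, 1]: negative, neutral, or positive."""
    words = transcript.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

Aggregating these scores across thousands of daily calls is what would let a call center see its overall makeup without listening to each recording.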
Bandwidth would like a tool to analyze call recordings to determine their content and sentiment. The project will involve the following:
Possible stretch goals:
At Bandwidth, we power voice, messaging, and 9-1-1 solutions that transform EVERY industry.
Since 1999 our belief in what should be has shaped our business. It’s the driving force behind our product roadmaps and every big idea. It’s a past and a future built on an unrelenting belief that there is a better way. No matter what kind of infrastructure you have (or don’t have) we’ve got your back with robust communications services, powered by the Bandwidth network.
We own the APIs… AND the network. Our nationwide all-IP voice network is custom-built to support the apps and experiences that make a real difference in the way we communicate every day. Your communications app is only as strong as the communications network that backs it, and with Bandwidth you have the control and scalability to ensure that your customers get exactly what they need, even as your business grows.
The Dell/EMC senior design project will give the team a chance to develop software that improves the real-world performance of Dell/EMC backup and restore software. The team will identify optimal strategies for handling different types of backup / restore loads, then apply those strategies to new applications in order to automatically improve their performance.
Data Domain is the brand name of a line of backup products from Dell/EMC that provide fast, reliable and space-efficient online backup of files, file systems and databases ranging in size up to terabytes of data. These products provide network-based access for saving, replicating and restoring data via a variety of network protocols (CIFS, NFS, OST). Using advanced compression and data de-duplication technology, gigabytes of data can be backed up to a Data Domain server in just a few minutes and reduced in size by a factor of ten to thirty or more.
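To illustrate the de-duplication idea in miniature: only chunks never seen before consume new space, and a backup is stored as an ordered list of chunk fingerprints. This toy uses fixed-size chunks; production systems like Data Domain use variable-size (content-defined) chunking and far more sophisticated indexing:

```python
import hashlib

def dedup_backup(data, store, chunk_size=4096):
    """Store `data` as chunk hashes, writing only chunks not already in `store`.

    Returns the ordered fingerprint list ("recipe") and bytes newly written.
    """
    recipe = []
    new_bytes = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk  # only unseen chunks consume space
            new_bytes += len(chunk)
        recipe.append(digest)
    return recipe, new_bytes

def restore(recipe, store):
    """Reassemble the original bytes from a recipe and the chunk store."""
    return b"".join(store[d] for d in recipe)
```

Because successive backups of largely unchanged data share most of their chunks, second and later backups write very little, which is where the factor-of-ten-or-more space reduction comes from.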
Our RTP Software Development Center develops a wide range of software for performing backups to and restoring data from Data Domain systems, including the Data Domain Boost libraries used by application software to perform complete, partial, and incremental backups and restores.
As Data Domain makes its way into even more data centers, the need to accommodate additional workloads is increased. Customers must be able to backup their data efficiently to meet constantly decreasing backup time periods.
This requirement also applies to restoring data and databases. Dell/EMC has developed technology to increase efficiency and reduce the time required to back up and protect data.
The focus of this project is to determine the optimum behavior of the Data Domain software for several data restore access patterns. Depending on the behavior of the application performing the data restore, we need to determine the optimum settings for several parameters that will modify the behavior of our internal software library.
Students will use predefined access methods to restore data files from a Data Domain Virtual System and, based on the time / throughput of the restore, modify one or more parameters to decrease the time of the restore, improve the throughput of the restore, or both.
These parameters (collectively called a profile) will be used by the Boost software to optimize the restore process.
We want to determine the optimum settings (or parameters) of a workflow profile for different access patterns. For the project we are defining three specific access patterns, which can be extended to include additional patterns if time and resources permit.
These are the 3 typical access patterns we want to investigate and optimize. For each access pattern, the team will use the supplied tools and software to restore a database and, based on the results, modify one or more of the profile parameters to reduce the time of the restore.
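One way to structure that search is an exhaustive sweep over parameter combinations, keeping the fastest. The parameter names below are placeholders, since the actual Boost profile knobs are defined by Dell/EMC's library, and `run_restore` stands in for one timed restore:

```python
import itertools

def best_profile(run_restore, param_grid):
    """Try every parameter combination and return (profile, elapsed_seconds).

    `run_restore(profile)` is assumed to perform one timed restore and
    return its elapsed time; `param_grid` maps each parameter name to the
    candidate values to try.
    """
    names = sorted(param_grid)
    best = None
    for values in itertools.product(*(param_grid[n] for n in names)):
        profile = dict(zip(names, values))
        elapsed = run_restore(profile)
        if best is None or elapsed < best[1]:
            best = (profile, elapsed)
    return best
```

Each measured (profile, elapsed) pair could also be recorded in the on-DDR database mentioned below, so the grid can later be pruned or data-mined rather than re-run from scratch.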
The results for each individual test can be stored on the Data Domain using an internal database and be data-mined to help identify the best profile.
This investigation may also identify cases where some of the profile parameters have no impact at all or significant impact when adjusted slightly.
The workload characteristics on the Data Domain Restorer (DDR) can be Random or Sequential. To have Data Domain File System (DDFS) support both Sequential and Random workloads and better utilize system resources, we need better intelligence in DDFS in detecting these workloads and enabling the respective optimizations where necessary.
An Access Pattern Detection (APD) algorithm that can identify most use cases that a DDR supports or needs to support will be necessary in order to apply the right optimizations.
Data Domain has a network optimization protocol for backup/restore of data called DD Boost. The APD logic can be applied to DD Boost to control the read-ahead cache. If the access pattern is detected to be sequential, read-ahead caching will be enabled or re-enabled. If the access pattern is detected to be random, read-ahead caching will be disabled.
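A toy version of that detect-then-switch logic is sketched below; the window size, tolerance, and the heuristic itself are illustrative assumptions, not the actual DDFS detection algorithm:

```python
def classify_pattern(offsets, window=8, tolerance=0.75):
    """Classify a stream of read offsets as 'sequential' or 'random'.

    Looks at the most recent `window` reads and calls the stream sequential
    when at least `tolerance` of the steps move strictly forward by a small,
    regular amount.
    """
    recent = offsets[-window:]
    if len(recent) < 2:
        return "sequential"  # too little history; optimistic default
    steps = [b - a for a, b in zip(recent, recent[1:])]
    typical = sorted(steps)[len(steps) // 2]  # median step size
    forward = sum(1 for s in steps if 0 < s <= 2 * max(typical, 1))
    return "sequential" if forward / len(steps) >= tolerance else "random"

def readahead_enabled(offsets):
    """DD Boost-style policy: read-ahead on for sequential, off for random."""
    return classify_pattern(offsets) == "sequential"
```

Re-evaluating the classification on every read (or every few reads) is what would let the cache policy follow a workload that "hops" between sequential and random phases.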
The advantages of implementing APD in DD Boost are:
Part of this project can be to apply machine learning to evaluate data restore patterns for different applications. On the DDR, we provide a database for data warehousing. Records can be added to this database using tags (we call them MDTags). Interesting data points that will help in optimizing read performance should be added to this database, and data mining/machine learning performed on it should result in 2 outcomes.
An example of a machine learning profile is in the context of READAHEADs, which can be applied to any storage units/mtrees.
So if an mtree is catering to an NFS client for Oracle workloads, we can apply an “Oracle Optimal restore” profile to that mtree. If a Storage Unit/mtree has a DD Boost client workload for NBU, we can apply an “NBU Optimal restore” profile to that storage unit. Clients can set their own profiles, and we may want to provide a “golden” (default – general purpose) profile. Profiles do not have to be “Restore” specific.
The project being proposed consists of three phases: (1) determine the optimum access pattern profile for sequential restores, (2) determine the optimum access pattern profile for random restores, and (3) determine the optimum access pattern profile for the sequential/random “hops.”
Using the software supplied by Dell/EMC:
Using the software supplied by Dell/EMC:
Using the software supplied by Dell/EMC:
This project provides an opportunity to attack a real-life problem covering the full engineering spectrum from requirements gathering through research, design and implementation and finally usage and analysis. This project will provide opportunities for creativity and innovation. Dell/EMC will work with the team closely to provide guidance and give customer feedback as necessary to maintain project scope and size. The project will give team members exposure to commercial software development on state-of-the-art industry backup systems.
The data generated from this engagement will allow Dell/EMC to increase the performance of the DDBoost product set and identify future architecture or design changes to the product offerings.
Dell/EMC Corporation is the world's leading developer and provider of information infrastructure technology and solutions. We help organizations of every size around the world keep their most essential digital information protected, secure, and continuously available.
We help enterprises of all sizes manage their growing volumes of information—from creation to disposal—according to its changing value to the business through big data analysis tools, information lifecycle management (ILM) strategies, and data protection solutions. We combine our best-of-breed platforms, software, and services into high-value, low-risk information infrastructure solutions that help organizations maximize the value of their information assets, improve service levels, lower costs, react quickly to change, achieve compliance with regulations, protect information from loss and unauthorized access, and manage, analyze, and automate more of their overall infrastructure. These solutions integrate networked storage technologies, storage systems, analytics engines, software, and services.
Dell/EMC’s mission is to help organizations of all sizes get the most value from their information and their relationships with our company.
The Research Triangle Park Software Design Center is a Dell/EMC software design center. We develop world-class software that is used in our VNX storage, DataDomain backup, and RSA security products.
Foresite.ai collects, analyzes and distributes real estate data from multiple different public sources in a map-based interface. Before analysis, this data is cleaned, normalized, aggregated and displayed to users. Users can augment this data and analysis with proprietary data uploaded directly from spreadsheets or other data formats (JSON, CSV, etc).
In the real estate industry, the most money is made when a firm has information that no one else has. At the same time, the process of buying, selling or renting property requires constant communication between the different stakeholders in the process. These two competing priorities lead to three characteristics of our users: 1) they are immensely protective of their proprietary data 2) it is important that they be able to modify/amend specific data points in our provided datasets or their own uploaded data and 3) they need different permissions for different types of users in order to facilitate collaboration on projects.
Our solution is a map-based interface, which displays many different geospatial datasets and lets the user quickly apply filters. It’s primarily run on the client-side, leading to multiple problems: 1) when users upload information and close the app, they must re-upload the data the next time they open the app; 2) if a user finds any errors in the Foresite-provided datasets, they must e-mail us, and we have to manually verify/correct these issues and 3) there is no way for users working on the same project or within the same firm to easily share data in the cloud. Our users are various beta customers who consist of real-estate brokers, developers, private-equity investors, and lenders.
We are exploring smarter, automated systems to streamline this tedious process. The frontend engine of Foresite.ai uses ReactJS and Javascript ES6, and the backend systems/databases use PostgreSQL and python3 scripts for handling data. A stripped-down version of our codebase will be shared as needed for the completion of the project.
We envision a web app with approximately 65% frontend and 35% backend work to be done. The login pages, the React GUI components (to be added to the existing app), and the entire authentication backend would need to be created from scratch. The authentication backend must be based in python3 (or nodeJS) and use PostgreSQL as its persistent database. This framework should have all the following features:
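Whatever the exact feature list, the credential-handling core of such a backend might look like the sketch below. This is a minimal illustration, assuming python3; the in-memory dict stands in for a PostgreSQL table (accessed via e.g. psycopg2 in the real app), and all names are ours.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Salted PBKDF2-SHA256 hash; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

class UserStore:
    """Stand-in for a PostgreSQL users table (username, salt, digest)."""

    def __init__(self):
        self._users = {}

    def register(self, username, password):
        if username in self._users:
            raise ValueError("username taken")
        self._users[username] = hash_password(password)

    def authenticate(self, username, password):
        record = self._users.get(username)
        if record is None:
            return False
        salt, digest = record
        _, candidate = hash_password(password, salt)
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(digest, candidate)
```

Per-user permissions (requirement 3 in the user characteristics above) would hang off the same table as a role column.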
During weekly sponsor meetings, we will review new developments in the app and provide feedback, clarification and training as needed.
Students will be required to sign over IP to Sponsor when team is formed.
Fujitsu America is one of the top three suppliers of retail systems and services worldwide. Using Microsoft’s .NET development platform, these systems offer a high performance yet open platform that diverse retailers are able to customize.
Large software projects (more than) occasionally require refactoring. If the project itself is a Software Development Kit (SDK), the refactoring can affect the solutions dependent on a prior version of the SDK. The work to adopt a new SDK often requires human effort and can be tedious and error prone. If this process could be at least partially automated, there could be a significant improvement in programmer productivity, as well as speedier adoption of the new SDK.
Fujitsu is confronted with one such migration caused by the ‘relocation’ of hundreds of classes to different namespaces and changes to method signatures between two versions of their SDK. Such a migration is likely in the future as well, so automation processes are of great interest. While the SDK in question is written in C#, the ‘consumers’ of the SDK are both C# and VB.NET. The complexity of this transformation largely rules out simple, text editor automation because the migration is likely to affect both the consumer source code and the project structures that build that source code.
A key enabler of automation for this project is the Roslyn compiler published as open source by Microsoft® with Visual Studio 2017™. This compiler allows programmatic access to source files in the same navigable expression trees used by the compiler itself. Modification of the source can then be done reliably within that context, avoiding potential scope or name collision problems that a naive, text-based solution might encounter.
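Roslyn gives this tree-based view for C# and VB.NET. As a language-neutral illustration of the idea (using Python's stdlib `ast` module, not Roslyn, and with package names invented for the example), a "namespace relocation" can be expressed as a tree transformation rather than a text edit:

```python
import ast

class NamespaceRewriter(ast.NodeTransformer):
    """Illustrative analogue of a tree-based migration: rewrite imports
    of code that 'moved' from old_pkg to new_pkg. Because we operate on
    the parsed tree, strings and comments are never touched, avoiding
    the collisions a text-based search/replace might cause."""

    def visit_ImportFrom(self, node):
        if node.module == "old_pkg":
            node.module = "new_pkg"
        return node

def migrate(source):
    """Parse, transform, and re-emit a source file."""
    tree = ast.parse(source)
    tree = NamespaceRewriter().visit(tree)
    return ast.unparse(tree)  # Python 3.9+
```

In the actual project, the equivalent Roslyn transformation would also need to update the project/solution structure, which plain syntax rewriting does not cover.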
In this project, at least the following steps are required:
The fundamental goal of this project is to be able to run an automated transformation of client code and to produce a compilable version of the sample consumers using the new SDK. Although the team won’t have direct access to client code produced by Fujitsu customers, as a stretch goal, Fujitsu will work with the team to run the transformation against one or more actual customer projects.
In Spring 2017, an NCSU Senior Design team made some progress toward these goals. Their work is the starting point for the Spring 2019 project.
The senior design team will be working with Visual Studio 2017, both for the project implementation and for the codebase being transformed.
Block-based programming languages, such as Snap! and Scratch, are used in introductory Computer Science (CS) and other STEM courses integrating computational thinking. As more classrooms teach and incorporate CS, there is a greater number of teachers without CS experience who need resources to help teach and assess student performance and competencies.
In order to support these teachers, we are developing a web-based grading platform, classroom dashboard, and teacher-student portal called Gradesnap!, similar to Gradescope. Within Gradesnap!, teachers will be able to do essential classroom tasks, such as:
Gradesnap! will offer a simpler interface to students. They’ll be able to:
Gradesnap! will offer a web interface to students and teachers. The team can expect to use the following, although there are opportunities to add or replace technologies, based on the team’s experience or expertise.
Game2Learn is advised by Dr. Tiffany Barnes, located on the 5th floor of Venture 2. The primary graduate student working on this project is Alex(andra) Milliken, a 4th year PhD student, who has worked with teachers using Snap! in an Introduction-to-CS class for the past 3 years.
If we were to pick two areas of interest that are making all the right moves in the software innovation spectrum, they would have to be Smart Glasses and Machine Learning. With use cases ranging across industries such as Medical, Retail, Service, and Tourism, the time is ripe for developing the art of the possible. Smart Glasses have a unique proposition: they can consume the field of view electronically while having the capability to overlay that view with Augmented Reality (AR). Machine Learning, along with Artificial Intelligence, brings the unique perspective of contextualizing data beyond the human mind's computational power. A combination of these two technologies has the potential to revolutionize the status quo across any process.
The team will first need to establish a development environment for smart glasses, such as Vuzix M300, and they’ll need to evaluate and set up other necessary SDKs to build applications leveraging native features of smart glasses.
The team will develop an application that features Optical Character Recognition (OCR) and Video Stream analytical capabilities for a quality assurance use case. The application will use Wikitude, an industry-leading AR SDK for object recognition, image recognition and 3D augmentation. It will also use state-of-the-art Amazon Web Services APIs for OCR and Video analysis. Some aspects of the target application are negotiable, based on the team’s experience and interests. A modular architecture will make the application both scalable and easy to extend or adapt to other use cases.
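To make the cloud-OCR piece concrete: with AWS, the live call would be something like `boto3.client("rekognition").detect_text(Image={"Bytes": frame})`. The sketch below shows only the response-parsing side; the confidence threshold and the filtering choices are ours, not part of the project specification.

```python
def extract_lines(response, min_confidence=80.0):
    """Keep high-confidence LINE detections from an AWS Rekognition
    DetectText response (which mixes LINE and WORD detections),
    returned in the order Rekognition emitted them."""
    return [
        d["DetectedText"]
        for d in response.get("TextDetections", [])
        if d.get("Type") == "LINE" and d.get("Confidence", 0.0) >= min_confidence
    ]
```

Keeping the parsing separate from the network call is also what makes the modular, cloud-provider-agnostic architecture mentioned above practical to test.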
One end result of this project will be a smart glass development template with ready-made connectors to cloud service providers such as AWS. These templates can act as accelerators for taking on real-world use cases. The use cases can be extended to be device agnostic and applicable to any industry, including initiatives such as Industry 4.0, Smart Campus, and Smart Cities.
The student team will gain a working knowledge of the intricacies behind Smart Glasses, AR/VR and Machine Learning algorithms, along with an understanding of cloud-based Software- and Platform-as-a-Service technologies. The team will understand how to bridge various seemingly standalone solutions into something more tangible. The team will also get experience with code versioning, quality control, best practices, and application packaging and deployment on local and cloud servers.
Students may be required to sign over IP to Sponsor when team is formed.
The Laboratory for Analytic Sciences' senior design project is focused on load optimization on cloud computing systems. Here, we use "load" to mean the total demand on a resource from all running analytics. Students will be given:
Students will design and implement an algorithm which determines an analytic scheduling strategy that results in resource loads that best fit the system owner's desired resource loads throughout the scheduling period. An initial possible approach to determining a scheduling strategy may be offered, though the students will be encouraged to either design a strategy of their own or to improve upon the provided approach. Testing and evaluation of initial and subsequent strategies will be required to gauge improvement over random scheduling.
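As a baseline to improve upon (this is our own toy sketch, not the approach the LAS may provide): treat each analytic as a single-slot load and greedily place it in the time slot where it moves the schedule closest to the owner's desired load curve.

```python
def greedy_schedule(analytic_loads, desired):
    """Greedily assign each analytic (a scalar load) to one time slot,
    minimizing squared deviation between scheduled and desired loads.
    Returns (assignments in descending-load order, per-slot loads)."""
    load = [0.0] * len(desired)
    assignment = []
    for cost in sorted(analytic_loads, reverse=True):  # big jobs first
        # Pick the slot where adding this job reduces (or least
        # increases) the squared error against the desired curve.
        best = min(
            range(len(desired)),
            key=lambda s: (load[s] + cost - desired[s]) ** 2
                          - (load[s] - desired[s]) ** 2,
        )
        load[best] += cost
        assignment.append(best)
    return assignment, load
```

Evaluating this against random scheduling over the same desired curve gives exactly the kind of improvement measurement the project calls for.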
Students will get hands-on experience running Apache Pig map-reduce jobs on a cloud computing system and applying mathematical optimization methods. A successful result will hopefully offer a new approach to cloud compute scheduling that produces a more stable and user-friendly cloud. In addition to the above-mentioned resources, weekly consulting will be provided. Students may implement their algorithm in a language of their choice, though Python is preferred if convenient.
The NC Collaborative for Children, Youth & Families (“The Collaborative”) is a non-profit group of cross-system agencies, families, and youth who educate communities about children’s issues, support the recruitment of family and youth leaders to provide input into policy and training, and organize support to local systems of care groups.
The Collaborative has been working to improve their website and use of social media to promote understanding of children’s issues. They need assistance with developing online training and ways to deliver their message to communities about children’s needs.
The Collaborative’s Helping Hands Training Portal is an online system that allows them to create a variety of informative and educational trainings about their organization and other topics of interest. The purpose of the system is to provide a custom venue for creating and distributing training content and assessments pursuant to the Collaborative’s mission and the strategic goal to “increase awareness and understanding of System of Care impact … and provide educational programs to enhance Systems of Care.”
The portal allows the Collaborative to create custom online educational courses (or “modules”) composed of rich text and embedded multimedia. Courses may also contain quizzes to assess the learner’s understanding of the content. Collaborative staff are able to track the learning progress of trainees, and individual users are able to review their own learning history.
The Collaborative would like to add more features to the training portal. They would like some sort of open-ended feedback on how participants use their trainings (perhaps as a forum of sharing of best practices, some sort of chat room, etc.). Eventually, the Collaborative would like to offer a pilot study to see if these additions improve the quality of service in the field.
They would also like for trainees to be able to share their learning progress or otherwise verify training status with potential employers or other third parties, such as to prove achievement of some certification (requirements to be determined!).
The portal started as a service project in the Senior Design Center over the 2017-18 academic year, and has since undergone further development to complete core features.
The training portal is built as a Node.js web application using TypeScript. TypeORM with MySQL is used for persistence. The UI is server-side rendered using Nunjucks templating. Interactive UI elements, such as quizzes, are built using Survey.js and custom browser-side JavaScript.
Students are encouraged to apply to this project if they have an interest in the technologies used, a desire to explore requirements and implement new features, and a passion for helping others!
Students are asked to release their work and contributions to the project under a compatible open-source license.
Triangle Strategy Group (TSG) is a technology consulting firm based in Raleigh, NC serving clients in the cosmetics, pharmaceuticals, food and beverage industries. We design Internet of Things (IoT) systems to create exciting new products and experiences for our clients and their customers.
Remynda is a new cosmetics organizer that helps consumers learn and remember when and how to use their skincare products. Skincare users often find it difficult to adhere to complex skincare regimens, which may require several skincare items at different times of the day and week. As a result, many new users never experience the full benefits of their purchases. Skincare users increasingly receive online coaching and product recommendations from a beauty consultant.
The Remynda organizer uses LEDs to remind a user when to use her products and interacts with her mobile device to provide mobile reminders, tracking reports, online coaching and product replenishment.
In addition to LEDs, the organizer includes a microcontroller, sensors, WiFi and NFC connectivity. The organizer will typically operate under battery power for 3-6 months before requiring charging.
The goal for this project is to develop software for a scalable network of Remynda organizers, interacting with a community of users and consultants.
This will include programming the Remynda hardware, establishing and maintaining connectivity, providing mobile notifications, creating user interfaces for both users and consultants, conducting data analytics and some simple machine learning.
For the mobile app:
The sponsor is flexible on the mobile app modality (web app / native). Project documentation will need to emphasize steps for building and extending the application. The Remynda device will be controlled by Arduino and programmed via the Arduino IDE using C++. The team will be able to choose an appropriate database technology (with an interest in keeping costs down).
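The tracking-report analytics mentioned above could start as simply as comparing scheduled product uses against logged uses. A minimal sketch; the data shapes (a per-day schedule dict and a `(date, product)` usage log) are assumptions for the example:

```python
from datetime import date

def adherence_rate(schedule, usage_log):
    """schedule: {product: uses_per_day}; usage_log: list of
    (date, product) events. Returns the fraction of scheduled uses
    actually completed across the days present in the log."""
    days = {d for d, _ in usage_log}
    expected = sum(schedule.values()) * len(days)
    if expected == 0:
        return 0.0
    done = 0
    for day in days:
        used = [p for d, p in usage_log if d == day]
        for product, per_day in schedule.items():
            # Cap at the scheduled count so extra uses don't inflate it.
            done += min(used.count(product), per_day)
    return done / expected
```

A metric like this is also a natural feature for the simple machine learning piece (e.g. predicting which users are likely to lapse).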
Each team member will be asked to sign an NCSU student participation agreement at the start of the project.
The team should consider the following stakeholders:
There is a video demo of prototype hardware at:
https://patrickjcampbell2001.vids.io/videos/d49ddfb51f19e5c15c/181221-remind-a-mp4
Students will be required to sign over IP to sponsor when team is formed.