Computer Science

Senior Design Center

Projects – Fall 2017


Cisco – Validated Lightweight Crypto for IoT Environments

Background

Cryptography underpins the security of today's systems, and improper or maliciously altered crypto implementations have been a major industry concern in recent years. To reduce this risk, Cisco has been working with the National Institute of Standards and Technology (NIST) on ways to validate crypto implementations. The output of these efforts is the Automated Cryptographic Validation Protocol (ACVP). ACVP lets a crypto implementation interact with a server that provides test vectors; the implementation encrypts the vectors and sends the results back, and the server checks them for correctness, confirming that the algorithms are implemented correctly. ACVP can be used for FIPS validation of cryptographic modules. Cisco has open-sourced an ACVP client that implementers can use to validate and certify their algorithm implementations against NIST's or third-party servers.
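To make that round trip concrete, here is a minimal sketch of the client side of such an exchange. The JSON field names, the vector values, and the session flow are illustrative placeholders rather than the actual ACVP wire format, and the sketch assumes the Python cryptography package in place of a C crypto library under test.

```python
# Illustrative ACVP-style round trip: receive a test vector, run the
# implementation under test, return the result for server-side checking.
# Field names and values are placeholders, not the real ACVP format.
import json
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# A test vector as a server might deliver it (hex-encoded).
vector = {
    "tcId": 1,
    "algorithm": "AES-CBC",
    "key": "000102030405060708090a0b0c0d0e0f",
    "iv": "00000000000000000000000000000000",
    "pt": "6bc1bee22e409f96e93d7e117393172a",
}

# The client encrypts the plaintext with the implementation under test...
cipher = Cipher(
    algorithms.AES(bytes.fromhex(vector["key"])),
    modes.CBC(bytes.fromhex(vector["iv"])),
    backend=default_backend(),
)
encryptor = cipher.encryptor()
ct = encryptor.update(bytes.fromhex(vector["pt"])) + encryptor.finalize()

# ...and sends the ciphertext back so the server can verify correctness.
print(json.dumps({"tcId": vector["tcId"], "ct": ct.hex()}))
```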

On the other hand, IoT environments cannot always use all the commonly accepted crypto algorithms available, because of their constrained nature. A battery-operated sensor, for example, cannot use 3072-bit RSA, because the processing load would drain its battery too quickly. NIST's Lightweight Crypto Program is working on defining lightweight crypto algorithms suitable for these constrained endpoints. Some of the lightweight algorithms are documented in its Report on Lightweight Cryptography. Additionally, a recent paper from NIST's 2016 LWC Workshop describes a methodology that uses Joules per byte as a metric for evaluating the energy efficiency of algorithms.

Description

In this project we want to introduce ACVP to lightweight crypto for constrained environments. ACVP can be used to validate lightweight crypto implementations and to provide the energy-efficiency metrics for these modules that matter in constrained environments.

  1. We want to extend the open-source ACVP client library to integrate and validate lightweight crypto libraries like WolfSSL or mbed TLS. The updated ACVP client should be able to run the crypto algorithm implementations from these libraries against the ACVP server test vectors to validate the crypto modules. Running the new ACVP client code on a Raspberry Pi is encouraged, but not mandatory.

  2. After integrating the new lightweight crypto library into ACVP, we want to look into the information that the ACVP client and server need to exchange in order to evaluate a lightweight crypto algorithm in terms of energy efficiency, measured in Joules per byte as E_n = (P × T) / n, where P is the power drawn, T is the execution time, and n is the number of bytes processed, as explained in a recent paper. We also need to understand how this information would be exchanged in ACVP messages. (A short sketch of this computation follows the list.)

  3. Finally, if time permits, we want to look into the algorithms described in the Report on Lightweight Cryptography and investigate the necessary changes to be made in the ACVP library to add these algorithms.
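As a rough illustration of the metric in item 2, the energy per byte falls out directly from a power and time measurement. The function and example numbers below are our own illustration of the formula, not the paper's measurement protocol.

```python
def energy_per_byte(power_watts, duration_s, n_bytes):
    """E_n = P * T / n, in Joules per byte."""
    return power_watts * duration_s / n_bytes

# Example (made-up numbers): a sensor drawing 0.5 W that encrypts
# 4096 bytes in 20 ms spends (0.5 * 0.020) / 4096 ≈ 2.4e-6 J/byte.
print(energy_per_byte(0.5, 0.020, 4096))
```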

If time permits, we would like to aggregate the output of this work into a paper to be submitted to a future NIST LWC Workshop.

Required Student Skills

  • Students must have taken, or be currently taking, at least one course in security or cryptography.

  • Experience in C programming and familiarity with JSON.

  • Familiarity with Git and GitHub.

  • Motivation to work on an interesting topic and try to make an impact.

Motivation

Students will be able to work on a new and interesting practical industry problem. Cryptography and security are hot topics today, and NIST, Cisco, and other companies have been trying to address the validated-crypto issue for some time. IoT security is likewise one of the most active areas in the industry. Marrying the two subjects and planting the seed for standardized, validated lightweight crypto will give students familiarity with interesting topics and a view of where the industry is heading. They will also get to use common tools like Git and see how cryptography is implemented in the real world.

Since Cisco has been actively working on ACVP and IoT security, this project will allow us to make the case for the industry to consider validating lightweight crypto in an automated fashion. This project will demonstrate that such validation is possible by integrating the ACVP library with at least one lightweight crypto library. We could also gain knowledge about potential changes to the ACVP protocol that would enable its use for crypto validation in constrained environments.

Deliverables

  1. A fork of the open-source ACVP library, used to submit a pull request that includes the code changes necessary to integrate the ACVP client library with a lightweight crypto library like WolfSSL or mbed TLS. A new, working app_main.c application should be included in this code. Running the new ACVP client code on a Raspberry Pi is encouraged, but not mandatory.

  2. A well-documented set of changes to the ACVP protocol (client and server) needed to support energy-consumption evaluation of the tested algorithms, based on the E_n = (P × T) / n metric from the recent paper.

  3. If time permits, a new pull request should be submitted to include the ACVP library changes required to add the lightweight crypto algorithms in the Report on Lightweight Cryptography to ACVP. Implementations of these algorithms are not necessary; they need only be added to ACVP's list of supported algorithms.

  4. If time permits, we would like to summarize the output of all this work and submit it to a future NIST LWC Workshop. The document should be written in LaTeX.

Fidelity Investments I – “Focused Frog”

Enterprise Cybersecurity (ECS) is a business unit within Fidelity Investments that sets Fidelity's strategy, policy, and standards for the security of and operations in cyberspace. We focus on threat reduction, vulnerability reduction, deterrence, international engagement, incident response, resiliency, and recovery policies and activities. This project will explore a version of chaos-engineering technology (nicknamed the "focused frog") that will help improve the resiliency of our distributed applications. Our millions of users depend on our applications, ranging from trading platforms to retirement planning, to be resilient and available during IT disruptions so that we can deliver best-in-class service to our customers.

Solution Description

Enterprise Cybersecurity IT Business Resiliency would like to develop a "focused" method of chaos engineering, utilizing a derivative of the open-source Chaos Monkey code, that can run on premises against Fidelity's traditional infrastructure (onsite VMs, traditional servers, appliances, network gear, etc.), Fidelity's internal cloud, and eventually scale to Amazon Web Services and other cloud service providers. This derivative of the Chaos Monkey code would be "focused" at first: it would target a specific application, server, slice, etc. in a controlled environment before expanding to multiple applications, servers, and so on. A phased approach is outlined below:

  • Phase 1: Start at the command-line level, using the Chaos Monkey code to target individual pieces of infrastructure: virtual machines, applications, servers (a minimal sketch of this kind of targeted fault injection appears below).

  • Phase 2: Creation of a user interface, integrated with ServiceNow (to pull server and incident groups) and our monitoring systems (to capture alert streams), that can be used to run the Chaos Monkey code against individual applications, virtual machines, slices, etc.

  • Phase 3: Ability to run the Chaos Monkey code, using the previously mentioned UI, against groups of applications, servers, slices, etc.

The phased approach has been chosen for two reasons:

  • To allow for continuous integration and development of the application, and to let the students see their application work before a user interface exists.

  • Fidelity is a large, risk-averse enterprise. By focusing on targeted applications that we already know are resilient, there is less risk of disrupting the overall operating environment.
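As a rough illustration of Phase 1, the sketch below injects a single fault against one explicitly allow-listed target. The service name, the allow-list, and the use of systemctl as the disruption mechanism are hypothetical placeholders; the real implementation would derive from the Chaos Monkey code and Fidelity's own tooling.

```python
# Minimal "focused chaos" sketch: disrupt exactly one pre-approved target.
# The target name and systemctl-based fault are hypothetical placeholders.
import random
import subprocess

# Phase 1 safety rail: only targets we already know to be resilient.
ALLOWED_TARGETS = ["demo-app.service"]  # hypothetical service name

def inject_fault(target, dry_run=True):
    """Restart a single allow-listed service to simulate a disruption."""
    if target not in ALLOWED_TARGETS:
        raise ValueError(f"{target} is not in the approved target list")
    cmd = ["systemctl", "restart", target]
    if dry_run:
        print("would run:", " ".join(cmd))
    else:
        subprocess.run(cmd, check=True)

inject_fault(random.choice(ALLOWED_TARGETS))  # dry run by default
```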

Suggested Technologies

JavaScript, NodeJS, SQL, open-source tools, GitHub, Apache, Linux, Windows, VMware, MariaDB

Additional Notes

Our team will assist with the data the students require. We will not provide direct access to ServiceNow or internal data. We will provide data that has been "masked" (i.e., no real Fidelity data) by taking an export file and altering the data to conform to Fidelity internal policies before allowing its use.


Fidelity Investments II – Guardian Tool

Access management is critical to the protection of customers and corporate security at Fidelity. Fidelity has a set of controls to ensure that users have access appropriate to their individually assigned job roles. Many times this results in managers reviewing large amounts of access for their direct reports because much of the access may be categorized as non-compliant. We would like to find a way to engage and incentivize managers for helping to reduce non-compliant access and completing reviews in a timely manner.

Solution Description

We would like to create an application or extend our existing application to present achievements and to track the achievements a manager has unlocked. These achievements will revolve around rewarding managers who have minimal non-compliant access and/or complete their reviews in a timely manner. A possible structure for this is outlined below:

  • Achievement: Access Guardian – Tier 1

    • This achievement would be unlocked for those managers who have direct reports with 0 items of non-compliant access.

  • Achievement: Access Guardian – Tier 2

    • This achievement would be unlocked for those managers who have direct reports with 1–10 items of non-compliant access.

  • Achievement: Access Guardian – Tier 3

    • This achievement would be unlocked for those managers who have direct reports with 11–20 items of non-compliant access.

  • Achievement: Access First Responder

    • This achievement would be unlocked if a manager completed their review within 5 days of the first notification.

Additional achievements could be given for engaging individual associates to provide input for the reviews. These managers could unlock the “Access Guardian – Collaboration” award.

Scoring is based on specified direct reports (i.e., a manager has 5 direct reports) and the number of items in their annual access review that are marked as "non-compliant". Data structures will be provided for the initial integration and scoring. All relevant data needed to perform the scoring calculations will be available in the data structure (which will also include the compliance value). A sketch of the tier logic appears below.
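To make the tier thresholds concrete, here is a minimal sketch of the scoring logic, written in Python for brevity even though the project itself targets C#/.NET. The function names and input shapes are our own placeholders; the real module would consume the provided data structures.

```python
# Hypothetical sketch of the achievement tiers described above.

def access_guardian_tier(non_compliant_items):
    """Map a manager's total non-compliant access items to a tier."""
    if non_compliant_items == 0:
        return "Access Guardian – Tier 1"
    if 1 <= non_compliant_items <= 10:
        return "Access Guardian – Tier 2"
    if 11 <= non_compliant_items <= 20:
        return "Access Guardian – Tier 3"
    return None  # more than 20 items: no tier unlocked

def first_responder(days_to_complete):
    """Unlocked if the review is completed within 5 days of first notice."""
    return days_to_complete <= 5

# Example: a manager whose 5 direct reports have 2, 0, 1, 0, 0 items.
print(access_guardian_tier(sum([2, 0, 1, 0, 0])))  # Access Guardian – Tier 2
```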

This implementation needs to be a separate module that can be linked inside of our current application (integration points could be defined later). This link could be as simple as a hyperlink, or as complicated as extending the “review/manager” data structures.

Time Permitting

  • Reports to view current achievements and those that could be unlocked in the future

  • An alerting system and/or leaderboard

Technologies

AngularJS, C#/.NET, ASP.NET Web API

IBM – Open Projects – Cognitive Telescope Network

Background

Telescopic follow-up of transient astronomical events is one of the most desirable and scientifically useful activities in modern observational astronomy. From a scientific perspective, pinpointing a transient event can be essential for discovering more about the source, either by directing more powerful telescopes to observe it, or by maintaining a near-continuous record of observations as the transient evolves. Very often transients are poorly localized on the sky and telescopes have a limited field of view, so pinpointing the transient source is often a daunting task. Perhaps most importantly, the number of large telescopes on the planet is small, and they cannot be commandeered to randomly search for every transient of interest.

Modern sub-meter-class telescopes, of the sort often owned by universities and increasingly by amateurs, could, if properly directed, play an important role in enabling transient follow-up. Modern technology gives them the ability to be automated and controlled remotely and to make useful imaging observations that will enable follow-up work by other, larger telescopes. The Cognitive Telescope Network (CTN) will be a framework that takes notifications of transient events and intelligently instructs a network of sub-meter telescopes, mapped into a grid, to observe a large region of the sky that likely contains the transient event, based on the geolocation, weather, and properties of the individual telescopes. The goal of CTN is to collect the data from this network of small telescopes, evaluate and classify that data to identify the most likely candidates for the transient being hunted, and deliver the results to the astronomer community so that larger telescopes can make directed, focused observations for further analysis.

Astronomical events sent to the CTN may trigger the telescopes in the network to be directed towards the event to capture images that may be analyzed later. The framework to communicate with telescopes world-wide and the ability to parse and capture events is provided. The overall goal is to build an ecosystem of subscribed members (astronomers and interested individuals) and disseminate knowledge in the community.

Project Scope

The CTN will be communicating with all the components (see Community link, below) through a centralized REST API written in Java and deployed to Bluemix as a Cloud Foundry or Microservices application. The scope of this project is limited to the development of the REST API. The student team will be interacting with the SQL Database from the API as well as looking into caching mechanisms.

There is no interaction with telescopes in this effort. The API will be front-ended with IBM API Connect service. OAuth authentication will be set up through this service. Session Caching will be handled through the Cloudant database service and backend RDBMS will be either DB2 or MySQL. This development effort will be part of the Foundation component of CTN.

Please refer to the diagram posted to the Community site. All communication between the CTN components takes place through the REST API, which also forms an abstraction layer on top of the database. API Connect is an IBM product for managing APIs. It is available in both on-premises and cloud versions; we will be using the cloud version on Bluemix.
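To give a feel for the resource shapes involved, here is a minimal sketch of one event-ingestion endpoint. The actual service will be written in Java and fronted by API Connect; this Python/Flask version, with hypothetical route and field names, only illustrates how the API abstracts the database away from the other components.

```python
# Hypothetical sketch of one CTN REST resource (the real service is Java).
# The route and field names are placeholders, not the project's actual API.
from flask import Flask, jsonify, request

app = Flask(__name__)
events = []  # stand-in for the DB2/MySQL backend behind the API

@app.route("/api/v1/events", methods=["POST"])
def register_event():
    """Accept a transient-event notification for telescope scheduling."""
    body = request.get_json()
    event = {
        "id": len(events) + 1,
        "ra": body["ra"],                  # right ascension of the region
        "dec": body["dec"],                # declination of the region
        "radius_deg": body["radius_deg"],  # localization uncertainty
    }
    events.append(event)
    return jsonify(event), 201

if __name__ == "__main__":
    app.run()
```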

This is a completely new development effort that we are starting this Fall. Some groundwork has been laid by a summer student from the University of Illinois at Urbana-Champaign, who will also be involved with the project in the Fall. ECE will be taking on three critical components of the project – the Observing Director, Image Analyzer, and Telescope Commander – under the Cognitive Epic. Some components will be developed under IoT as well. The Telescope Commander, the component that interacts with the telescope, is being developed under ECE and will be led by Dr. Rachana Gupta. CSC students are more than welcome to participate in the discussions and contribute ideas, since they will be part of the CTN asset team, but responsibility for developing that component remains with the team that chooses the Telescope Commander Open Project.

All the components developed will be working components, primarily on Bluemix. We will be using both the Cloud Foundry layer and microservices (using Docker containers) for most of the work we do on the project. All the code will be hosted in the Bluemix Git repository, and the plan is to build a DevOps pipeline using the toolchain on Bluemix.

CTN is a project that was started by Dr. Shane Larson of Northwestern University and Arunava (Ron) Majumbar, with help from a number of IBMers. In essence, this is a research collaboration between IBM, Northwestern, and now NC State.

Long Term Goals

A long-term goal is to bring astronomers together in a common forum so that we can collaborate on this and other projects. At the moment the community has only 6 members, including Drs. Majumbar and Larson. CTN will be one specific asset with its own community, focused mainly on the project components and development. The Astronomy community, on the other hand, will host more general information, including posted pictures, etc. The Event Publisher component will automate this.

The plan is also to build an interactive app for the end user that communicates with Watson Conversation and related services through the Personal Astronomer components. There are no plans to start this in the Fall unless a university project wants to pick it up; it could potentially be built as part of the Design program from the Austin labs.

The Image Analyzer component, planned to be developed by ECE, will work with Watson Visual Recognition and train Watson on astronomical images. We will also look into improving the algorithms involved.

Please join: https://www.ibm.com/developerworks/community/groups/community/astronomy

Infusion – Crowd-Sourced Infrastructure Review

Business Problem

With over four million miles of road in the United States, keeping all of it running smoothly is a difficult job. Finding all of the infrastructure deficiencies and then prioritizing them is a herculean task that government officials wrangle with every day. While experts need to make the final call, we would like to give them another stream of data to help with the job. We also want to improve citizen participation and interaction in local and state government by giving them a voice into an important part of their lives that the local and state government control.

Solution Description

We would like to create a mobile-friendly web application that allows users to report infrastructure deficiencies, and to verify issues reported by others, with the goal of providing a source of information both to offices responsible for infrastructure and to citizens who may be traveling or moving to a new area. The application will use mapping services to gather and display user-reported issues, as well as issues already known to government officials. Users can report issues at specific locations by placing a pin on a map and choosing from a list of common issues and/or filling in a brief description (dangerous intersection design, bridge damage, road needs re-surfacing, need a sidewalk). Users can also respond to existing issues, raising or lowering their overall significance and adding their own details. Users should also be able to navigate a map of reported issues (either by scrolling and zooming, or by entering a location) and tap on issues to see the full description as well as the responses of other users. On this map, the significance of each issue is visually represented in some way (for example, pins might become more transparent the fewer user responses they have). A sketch of a possible issue data model appears below.
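As one way to picture the data involved, here is a minimal sketch of an issue record and the significance-to-opacity mapping suggested above. The field names, the example category, and the opacity formula are illustrative assumptions, not a prescribed schema.

```python
# Illustrative issue record and significance rendering; not a fixed schema.
from dataclasses import dataclass

@dataclass
class Issue:
    lat: float         # pin location for the Google Maps marker
    lng: float
    category: str      # e.g. "bridge damage", "need a sidewalk"
    description: str = ""
    votes: int = 0     # net responses from other users

def pin_opacity(issue, max_votes=50):
    """More responses -> more opaque pin (one possible visual mapping)."""
    return max(0.0, min(1.0, 0.2 + 0.8 * issue.votes / max_votes))

pothole = Issue(35.7847, -78.6821, "road needs re-surfacing", votes=12)
print(round(pin_opacity(pothole), 2))  # 0.39
```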

Technology Constraints

  • Web application that works smoothly on mobile and desktops

  • Google Maps API integration

  • Hosted database solution to store issue information (flexible on which specific DB)

  • Use an API to handle communication between web app and database

  • Pull in data from local governments about current, known issues and planned solutions, such as from the ArcGIS Comprehensive Transportation Project Map.

Potential Stretch Goals

  • Allow users to upload pictures along with the report

  • Implement various sorting and filtering methods

  • Create a web dashboard to allow infrastructure services to make official comments on issues

  • Gamify the submission and ranking process

  • Register the project & data with a local Code for America chapter

Merck – NC Collaborative for Children, Youth & Families

(Service Project)

Merck, a Super ePartner of the CSC Department, is sponsoring a service project for the NC Collaborative for Children, Youth & Families, a non-profit group of cross-system agencies, families, and youth who educate communities about children’s issues; support the recruitment of family and youth leaders to provide input into policy and training; and organize support to local systems of care groups.

Project Description

We have been working to improve our website and use of social media to promote understanding of children’s issues. We need assistance with developing on-line training and ways to deliver our message to communities about children’s needs.

https://www.nccollaborative.org/child-and-family-teams/

Social workers throughout the State come to this website to become trained and certified so they can serve as mental health consultants with children and families (they can also attend in-person workshops if they prefer). Employers pay for the training; when training is completed, a box is checked off and the trainee is certified. There is no assessment or monitoring involved. The Collaborative would like to build more into their website to allow assessment of the person who is trained (via something like a quiz) and then some sort of monitoring/assessment of how participants use their training (this could be in the form of sharing best practices, some sort of chat room, etc.). If the website includes this, the Collaborative would like to run a pilot study to see whether these additions improve the quality of service in the field.

The Collaborative would also like to offer additional training (other than the major one described above) via their website, with assessment, and perhaps a dashboard of some sort for participants to keep track of training hours and assessments (including the major one, above). Perhaps potential employers could also come to this site to view dashboards of potential employees or employees could point to dashboards from their resumes (requirements to be determined!).

The current website is built using WordPress.

NetApp – Deployment Usage Collection (DUC) Service

Problem Statement

Most data collection services are focused on analyzing and reporting on diagnostics, performance, application stability, network analysis, etc. Existing systems (DataDog, Prometheus, Nagios) can be complex, expensive, or both. Even at NetApp, we can attest to the complexities of collecting time-series data (which we call ASUP, for AutoSupport), and trying to extract value out of NetApp's ASUP data lake is a massive challenge. Even with our advancements around using better visualization tools (like Tableau), answering questions about NetApp's install base is not self-service, the tools are non-intuitive, and querying the massive amounts of data is unwieldy (unless you are an expert).

So what happens when you want to collect something basic, like "Are my customers using my feature?" (where a "customer" is one who bought my product)? These robust data collection services are generally too complex for collecting simple usage data.

We (as in, technology communities) need a new way to collect "usage" statistics; things like "What features are our customers using? What features are they not using? How often are they using them?" Answers to these questions greatly benefit product teams (Product Managers, Product Owners, Project Team Leads, Engineering Managers, etc.): they can justify a roadmap, influence where to invest resources, boost confidence in recommending features to other customers, and lead to a much deeper understanding of your product.

Project Goals

We would like a student team to build a new platform for collecting “usage” data of complex deployments; one that is simple, open, and intuitive; a system that is readily available to collect data from any internet-connected deployment, system, and/or device; a system that is scalable, reliable, and secure.

The major components of the Deployment Usage Collection platform are as follows (see architecture diagram below):

  1. The DUC Web Service: receives and stores usage data. This is a cloud-hosted service with front-end API and content repository for storing time-series data.

  2. The DUC Dashboard: displays collected usage data.

  3. The DUC Clients: one or more clients that collect data within the deployment / system / device and reports that data to the DUC Web Service.

System Architecture Diagram

[Figure: DUC platform architecture diagram]

Use Cases

The Architecture Diagram shown above conveys one instance of what a DUC platform would look like. But there are a variety of “use cases” that DUC could be applied to. Two use cases immediately come to mind:

  1. OpenStack – in open-source ecosystems, tracking usage of features is limited to vendor-specific collection. DUC will enable usage collection at a broader level. It will be useful for open-source projects where community leaders need to build roadmaps, focus resources (or not) on important features, and/or inform feature-deprecation decisions.

  2. Containers – another open-source ecosystem where no specific vendor leads the community, and where deployment metrics that could justify investing in one solution versus another are few and far between. An open usage-collection system that lets any component of a container or container-orchestrator deployment report usage will be useful for the community in making decisions.

Taking the OpenStack use case as an example, here is a similar architecture diagram of the various components and their relationship to one another:

[Figure: OpenStack architecture diagram]

This “OpenStack Architecture” example shows specifically how each component will function for the OpenStack use case. (Note: For the prototype built by NetApp, we used MongoDB for the object storage, and Metabase for the Dashboard, but these are just examples of the technology we used). You can also see how the DUC Client collects metrics from a variety of OpenStack services and bundles them for delivery to the Web Service.

With these use cases in mind, it would be easy to build in API calls that notify the collection service which features are being used, how they are being used, and how often they are being used. A sketch of such a client-side call appears below.
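As an illustration of how lightweight such a call could be, here is a minimal sketch of a DUC client reporting one usage event. The endpoint URL, bundle fields, and JSON layout are hypothetical placeholders, not the prototype's actual interface.

```python
# Hypothetical DUC client call; the endpoint and fields are placeholders.
import json
import time
import urllib.request

DUC_ENDPOINT = "https://duc.example.com/api/v1/usage"  # placeholder URL

def report_usage(deployment_id, feature, count=1):
    """Send one time-stamped usage record to the DUC Web Service."""
    bundle = {
        "deployment": deployment_id,
        "feature": feature,
        "count": count,
        "timestamp": int(time.time()),
    }
    req = urllib.request.Request(
        DUC_ENDPOINT,
        data=json.dumps(bundle).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget for this sketch

# e.g., an OpenStack service noting that volume snapshots were used:
# report_usage("cluster-42", "cinder.snapshot.create")
```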

Other Considerations

"Usage collection" and "reporting metrics" are just two aspects of this project. Another very important topic to consider is the user's ability to trust what data is being sent into the DUC platform (where a "user" is someone who deploys/administers the deployment/system). Most users are sensitive about sending data over the internet, and corporations are even more cautious. Each use case addressed during this project should consider the best, most open method for conveying what data is being collected. For example, in OpenStack, a Horizon plugin could be developed that displays the data bundle very clearly to the user.

There are other aspects that can be discussed/decided throughout the project, such as Authentication, Encryption, how to prevent vendors from artificially boosting their metrics (with simulated or spoofed deployments), and probably more.

Deliverables

  • The DUC Web Service – A deployed, cloud-provided service that ingests data bundles via REST API and stores data in a time-series data model

  • The DUC Dashboard – A visualization tool/dashboard for analyzing and reporting metrics from the content repository (prototype uses Metabase)

  • Two DUC Clients – Working prototypes for the following use cases:

    • an OpenStack Service

    • the NetApp Docker Volume Plugin (which is a NetApp plugin used in a Containers deployment)

  • A visualization tool for the admin/deployer of an individual system/deployment for each use case that inspects and displays the data bundle (for example, in the OpenStack use case, the visualization tool would be Horizon, but could also be a CLI tool to dump text).

  • A deployment of StorageGRID WebScale as the back-end cloud-enabled object storage

Note: A proof-of-concept exists, and works end-to-end. It was hacked together in 2 days by a team of 4 developers during a NetApp hack-a-thon. More details and sample code can be provided.
