Number Porting Authentication Service
When you change your phone provider and take your phone number with you, a complex and error-prone process takes place to update the Public Switched Telephone Network (the PSTN) so that future calls will route to you via your new provider. This process is called number porting.
One of the common hang-ups of number porting involves validating that you truly own the phone number that you are attempting to move to your new provider. Think about it... somebody walked into an AT&T store and expressed a desire to move your phone number from Verizon to AT&T. How does Verizon know that it was really you? More often than you'd think, somebody is actually attempting to steal your number.
The phone industry has adopted a set of practices for authenticating a number port, but they are all cumbersome. Strangely, none of them involves the very obvious solution of having your old provider ask you whether you indeed initiated the process.
For this project, you will implement this alternative approach for Bandwidth. You will build a web service that will send a text message to a phone number when it is requested to be ported away from Bandwidth. The recipient of the text message will simply reply "yes" in order to authenticate that the true owner of that phone number (or at least the person currently possessing the phone) has requested that the number be ported. When you receive the reply, you will capture the details of the exchange in a serialized format and digitally sign it before delivering it back to the system that initiated the process.
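The capture-and-sign step described above can be sketched as follows. This is a minimal illustration only: the function names and fields are invented for the example, a production service would likely use an asymmetric signature (e.g. RSA or ECDSA) so the receiving system can verify without a shared secret, and the key here is a placeholder.

```python
import hmac
import hashlib
import json

# Placeholder key for the sketch; a real service would never hard-code this.
SECRET_KEY = b"demo-signing-key"

def record_consent(phone_number, reply_text, timestamp):
    """Serialize the SMS exchange and attach a signature over the payload."""
    payload = json.dumps(
        {"phone": phone_number, "reply": reply_text, "ts": timestamp},
        sort_keys=True,  # canonical key order so the signature is reproducible
    )
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_consent(record):
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(SECRET_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The signed record can then be handed back to the system that initiated the port, which can verify that the exchange was not altered in transit.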
As time permits, you will expand the application to support the same flow with traditional voice phone calls. If a text message is undesirable, the application will simply call the phone number and play a recording that says, "press 1 if you want to move your phone number." Once that piece is working, we will consider adding multi-tenancy that would allow this service to be valuable to more telephone companies than just Bandwidth.
You will also work closely with Bandwidth engineers. We will collaborate utilizing industry-leading agile processes, giving you a real-world experience that goes well beyond the code that you're writing. Bandwidth headquarters are located on Centennial Campus, making it easy for us to meet weekly to iterate on designs and code and to ensure we're building a great product. In the process, we're sure to have a lot of fun together.
Bandwidth is an innovative provider of next generation communication services that are helping change the way that people across the world communicate. We're the Voice Over IP platform behind services like Skype and Google Voice. We also built and operate the Republic Wireless mobile phone service, which is disrupting the mobile phone industry with its Wi-Fi-first approach to mobile communications. We're a software company that is revolutionizing the telecommunications industry by powering the companies who change the way we communicate.
Data Visualization of Finance Systems Integration
We have a large number of systems/applications within Boeing Business Systems. This results in a significant number of interfaces between the systems. Understanding which systems are interacting and what data is being passed is very important.
This project is to evaluate options and develop a data visualization of the interfaces between systems. This should be a web application and the students would be expected to choose the technology stack (with input and guidance from Boeing). Students should feel free to leverage any open source tools or code.
The goal is to provide a visualization that makes the presentation simple and informative. The view should provide information such as:
The data visualization should address the items above and the students will decide how best to display the information and if it is a single view or multiple views. The team (students and Boeing together) will evaluate the options to decide on the most useful display.
Students will be provided the following information:
The main focus of the project is the visualization of the data. But there are supporting pieces that will be required (such as the storing of the data and the ability to update/add data). Students will be expected to:
Students will be required to sign over IP to Boeing when the team is formed.
Web-based Flow Chart Editor
Flow Chart is a web app that allows its users to visually create flow charts by selecting components from a toolbox to build workflows. Primarily targeted at lawyers, the workflow app allows them to automate certain routine tasks so that they can focus on more important work.
The toolbox of a flow chart editor should have components for:
The idea of the toolbox is to let a user select various controls to create a flow chart based on their needs. The project is not about executing the flow chart but about collecting all the data needed to kick off the execution. Behind the scenes, the data will be stored in JSON format so that it is easy for another system to consume and process.
A good example is a lawyer who creates a specific kind of document for a case. Based on that document's creation, the lawyer would like to automate the subsequent tasks: sending an email to the paralegal to review the document, waiting a few days for the review, and then sending a reminder email to the paralegal. If the court date is nearing, a higher-priority email is sent to remind the paralegal to review. The task is complete when the paralegal has reviewed the document. Using the flow chart editor, the lawyer should be able to automate this entire task, which can be pictorially represented as below.
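One possible JSON encoding of a workflow like the lawyer's review example is sketched below. The schema (field names, action types, condition syntax) is entirely hypothetical; designing the actual format the downstream system consumes would be part of the project.

```python
import json

# Hypothetical workflow definition; every field name here is illustrative.
workflow = {
    "name": "Document review reminder",
    "trigger": {"type": "document_created", "documentType": "case_filing"},
    "steps": [
        {"id": 1, "action": "send_email", "to": "paralegal",
         "template": "review_request"},
        {"id": 2, "action": "wait", "days": 3},
        {"id": 3, "action": "send_email", "to": "paralegal",
         "template": "review_reminder",
         "condition": {"not": "document_reviewed"}},
    ],
    "complete_when": "document_reviewed",
}

# This serialized form is what another system would consume and execute.
serialized = json.dumps(workflow, indent=2)
```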
More such flow charts can be created for calendaring rules, Intake forms, auto-communications with clients etc.
Students will be required to sign over IP to LexisNexis when the team is formed.
OpenUI5 Calendar Component
The goal of this project is to build an OpenUI5-based calendar component with robust APIs to be used in Enterprise Solutions. There are currently no OpenUI5-based calendar views which show scheduled events and their details. An OpenUI5-based view is needed to seamlessly integrate with the other OpenUI5 components we use in our custom applications.
The architecture of the component must be one that considers re-use at the very core from both the component developer's perspective (making sure any similar functionality across the different views is packaged such that all views can reuse it) and the consuming app's point of view (thinking of how a consumer would want to customize the component as well as manipulate it programmatically). Development should follow OpenUI5 guidelines at https://github.com/SAP/openui5 so that the code could be contributed back to OpenUI5 or made available as an Open Source library. The design should be reviewed with SAS UX periodically to ensure it fits the interaction paradigms and overall layout of SAS applications so that it can be integrated seamlessly into any application.
The Calendar Component will support four main views: Daily, Monthly, Weekly, and Gantt (see Figures 1-5, below). The Daily and Monthly views have priority, but implementing the Weekly and Gantt views is also desired. This should be taken into consideration when designing the API and models (extensibility).
Additional considerations for the Gantt View
The team assigned to this project will receive a document with more detailed requirements and specific constraints for the system in general and for each of the views before the initial sponsor meeting.
Figure 1. Daily View
Figure 2. Weekly View
Figure 3. Monthly View
Figure 4. Gantt View
Figure 5. Add Event
Intelligent Virtual Assistants (IVAs) mine unstructured data to derive insights that allow them to answer human-language queries. However, many of the more important queries cannot be fully answered without the inclusion of structured data. This project will therefore look at how best to incorporate structured relational data (such as is commonly stored in relational databases) into the question-and-answer functionality of an Intelligent Virtual Assistant.
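As a rough sketch of the idea: once the IVA has extracted an intent and entities from a natural-language question, the structured half of the answer can come from a relational query. The table, columns, and question below are invented purely for illustration.

```python
import sqlite3

# Toy structured data store; schema and contents are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (member_id TEXT, year INTEGER, amount REAL)")
conn.executemany("INSERT INTO claims VALUES (?, ?, ?)",
                 [("M1", 2015, 120.0), ("M1", 2015, 80.0), ("M2", 2015, 50.0)])

def answer_total_claims(member_id, year):
    """Answer a question like 'How much did I spend on claims in <year>?'
    after the IVA has resolved the member and the year from the query text."""
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM claims "
        "WHERE member_id = ? AND year = ?",
        (member_id, year)).fetchone()
    return row[0]
```

The interesting part of the project is everything around this query: deciding which questions need structured data, mapping language to schema, and blending the result with insights mined from unstructured sources.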
The solution will have the following characteristics:
Key Takeaways Expected/Questions to answer
Think of these as challenges you could take on. Teams are invited to take on one or more of these.
Students will be required to sign an NDA and sign over IP to BCBS of NC when the team is formed.
The Internet of Everything: Connect the Unconnected
While the Internet is not a cure-all, it is the one technology that has the potential to rectify many of the challenges we face. Already, the Internet has benefited many individuals, businesses, and countries by improving education through the democratization of information, allowing for economic growth through electronic commerce, and improving business innovation by enabling greater collaboration.
So, how will the Internet evolve to continue changing and improving the world?
Cisco defines the Internet of Everything as bringing together people, processes, data, and things to make networked connections more relevant and valuable than ever before — turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals and countries.
The focus of the capstone project is to recognize innovators who can help change the world by re-inventing the manner in which unconnected people, processes, data, and things are connected. We are asking students to propose and develop innovative technological ideas, technology-based prototypes, and proofs of concept (PoC) to further the goal of "Connecting the Unconnected."
Example IoE Use Cases
Wouldn't it be nice if your fridge knew exactly what, and how much, food it contained? What if your fridge and phone could sync and notify you of the groceries you may need? What if it could suggest recipes you could cook with your fridge's current contents? The Internet of Things can solve this by connecting the contents of your fridge with recipes and relaying this information to your phone.
Stop Driving in Circles!
Finding a parking spot is a pain! What if your phone could tell where, based on your current location, you are most likely to find an open parking spot? Think of the amount of time you would save if you knew exactly where to park. Not only would this save you time, but the city would benefit by the decrease in traffic caused by cars constantly stopping and driving in circles looking for parking spots. Based on historical data, and using predictive analytics, create a connected parking application that will inform its users where to park during busy times of the day.
What if your house knew exactly when to turn on the lights based on the history of when your lights were turned on and off? Not only would you save time, but your energy consumption would decrease dramatically. Combining machine learning techniques with motion and energy sensors provided by Cisco, create an application that will automatically turn your lights on and off.
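A very simple baseline for this use case is a frequency model over past observations: for each hour of the day, count how often the lights were on, and predict "on" for hours that were lit a majority of the time. This sketch is only a starting point; a real system would incorporate the motion and energy sensor data and a proper classifier.

```python
from collections import Counter

def learn_schedule(history, threshold=0.5):
    """Learn which hours the lights should be on.
    history: iterable of (hour, was_on) observations across many days."""
    on_counts, totals = Counter(), Counter()
    for hour, was_on in history:
        totals[hour] += 1
        if was_on:
            on_counts[hour] += 1
    # Keep the hours that were on in more than `threshold` of observations.
    return {h for h in totals if on_counts[h] / totals[h] > threshold}

def should_turn_on(schedule, hour):
    return hour in schedule
```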
No More Wilting Plants
You put so much time into growing your plants to the point of harvesting. Sometimes, due to your schedule and forgetfulness, your plants are neglected which results in wilting and, ultimately, a delay in the time it takes for your plants to grow to the point of fruition. What if your phone could notify you of when it was time to water your plants? Using moisture and heat sensors, provided by Cisco, create an application that connects your garden to your phone.
Monitoring/Managing Stream Usage
Data Domain is the brand name of a line of disk backup products from EMC that provide fast, reliable and space-efficient online backup of files, file systems and databases ranging in size up to terabytes of data. These products provide network-based access for saving, replicating and restoring data via a variety of network protocols (CIFS, NFS, OST). Using advanced compression and data de-duplication technology, gigabytes of data can be backed up to disk in just a few minutes and reduced in size by a factor of ten to thirty or more.
Our RTP Software Development Center develops a wide range of software for performing backups to and restoring data from Data Domain systems, including the Data Domain Boost libraries used by application software to perform complete, partial, and incremental backups and restores.
As Data Domain makes its way into even more data centers, the need to accommodate additional workloads increases. Customers must be able to back up their data efficiently to meet constantly shrinking backup windows. Multiple customer systems may be backing up data to a single Data Domain system simultaneously, and the different backups may require different amounts of resources from the Data Domain system and place different loads on it. One such resource is streams, representing the number of files being actively backed up at any time by the backup application. Streams are a limited global resource provided by the Data Domain system and shared by all backup applications using it. Currently there is no means for different backup applications to coordinate their usage of stream resources.

The focus of this project is to implement, in the Boost libraries used by the backup applications, a method of monitoring and managing stream usage among multiple applications. Students will devise and implement a method of tracking the streams used by a backup application and managing this usage across multiple backups, taking advantage of existing interfaces provided by the Data Domain Boost libraries. The purpose of the prototype is to demonstrate how stream resource usage might be better monitored and managed to improve overall backup performance without requiring any changes to the backup application software or the Data Domain system.
We want to develop a simple prototype library in C that will act as a shim layer between the existing Data Domain Boost library and a backup application. This extra layer will implement the same interface to the backup application as the current library. The shim layer will monitor the stream resources available from the Data Domain system being used for backups, track the streams used by the backup application, and share information on streams used and needed with other backup applications so that they can coordinate stream usage. The goal is to apportion the streams among the backup applications such that every backup receives the number of streams it needs to complete efficiently and on time.
Sharing stream usage information among multiple backup applications will require implementing some method of sharing information between the backup processes, such as Linux shared memory segments, memory mapped files, etc. The new shim layer will need to implement some such mechanism for sharing each backup application’s stream needs and usage, and for updating this information appropriately so that all backup applications can access it. The new shim layer will also need to provide a means to update this shared stream usage data when backup applications start using a stream and finish using a stream.
When the application opens a file for writing a backup or for reading to restore a file, the shim layer will check the current stream usage of the application and other applications, and the current availability of streams on the Data Domain backup system. The shim layer will need to decide whether to allow use of another stream, based on the application’s current stream usage, the stream usage of other applications, and the streams available on the Data Domain system being used for the backups.
When a file is closed the shim layer will update the stream usage information to indicate the stream is no longer being used. This may make the stream available to the backup application closing the file, or to other backup applications depending on the current stream allocations and usages.
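The open/close accounting described above can be sketched as follows. The prototype itself is a C library sharing this state across processes (for example via a shared memory segment); Python and in-process state are used here only to show the bookkeeping logic, and the fair-share policy is one assumed example of how streams might be apportioned.

```python
class StreamBroker:
    """Toy model of the shim layer's stream accounting."""

    def __init__(self, total_streams):
        self.total = total_streams   # streams the Data Domain system provides
        self.in_use = {}             # app_id -> streams currently held

    def try_open(self, app_id):
        """Called on file open: grant a stream if a fair share allows it."""
        apps = len(self.in_use) + (0 if app_id in self.in_use else 1)
        fair_share = max(1, self.total // apps)
        held = self.in_use.get(app_id, 0)
        if held < fair_share and sum(self.in_use.values()) < self.total:
            self.in_use[app_id] = held + 1
            return True
        return False                 # caller should wait and retry

    def close(self, app_id):
        """Called on file close: release the stream for this or other apps."""
        if self.in_use.get(app_id, 0) > 0:
            self.in_use[app_id] -= 1
```

In the real shim, `try_open` and `close` would update the shared usage data so that every backup application sees a consistent view of stream allocations.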
For the purpose of this prototype, the backup/restore application can be combined with the new shim layer. Source code from previous projects may be provided that can be used as a starting point for the backup/restore application. These sources can be modified appropriately to incorporate the new stream monitoring and managing features.
The project being proposed consists of two phases: (1) creating the shim layer that monitors and manages stream usage, along with creating/modifying a backup application that uses it, and (2) evaluating the performance of the new shim layer.
Design and implement the basic shim library that automatically monitors and manages stream usage in a basic backup application. The library should:
Phase 2 is an open-ended evaluation, performance measurement, and analysis phase using the monitoring and managing library and application produced in Phase 1 to do one or more of the following. How many and which of these items are done will depend on the capabilities of the software developed in Phase 1 and the time and resources available after Phase 1 is completed.
Benefits to NCSU Students
This project provides an opportunity to attack a real life problem covering the full engineering spectrum from requirements gathering through research, design and implementation and finally usage and analysis. This project will provide opportunities for creativity and innovation. EMC will work with the team closely to provide guidance and give customer feedback as necessary to maintain project scope and size. The project will give team members an exposure to commercial software development on state-of-the-art industry backup systems.
Benefits to EMC
The monitoring and managing stream layer will serve as a proof-of-concept for a more efficient Data Domain backup library and will allow evaluation of architectural and design decisions in current and future versions of Data Domain backup software. Performance results will aid in estimating expected application performance customers would experience in backing up and restoring their data.
EMC Corporation is the world's leading developer and provider of information infrastructure technology and solutions. We help organizations of every size around the world keep their most essential digital information protected, secure, and continuously available.
We help enterprises of all sizes manage their growing volumes of information—from creation to disposal—according to its changing value to the business through big data analysis tools, information lifecycle management (ILM) strategies, and data protection solutions. We combine our best-of-breed platforms, software, and services into high-value, low-risk information infrastructure solutions that help organizations maximize the value of their information assets, improve service levels, lower costs, react quickly to change, achieve compliance with regulations, protect information from loss and unauthorized access, and manage, analyze, and automate more of their overall infrastructure. These solutions integrate networked storage technologies, storage systems, analytics engines, software, and services.
EMC's mission is to help organizations of all sizes get the most value from their information and their relationships with our company.
The Research Triangle Park Software Design Center is an EMC software design center. We develop world-class software that is used in our VNX storage, DataDomain backup, and RSA security products.
Music performances have seen an explosion in the use of visuals. Performers set up complex video systems, but the existing software packages designed for live music visuals are often proprietary and cost hundreds of dollars, a price that does not even include the hardware needed to run them. Our answer to this problem is OpenStage, a stand-alone open source system that can ingest sound, analyze it, and create visuals on the fly. The system is designed to run on small, cheap computers such as the Raspberry Pi, but will work on all major operating systems. Unlike traditional video-jockey systems, this system is more generative and hands-off during the performance. Rather than being purely driven by a human picking the clips to play, it decides what to show based on the sound, additional data sources, and basic configurations set by the user through the user portal.
OpenStage aims to take sound-driven visuals and broaden them to the massive number of data feeds available in today's world. These additional data sources come in two forms. The first is physical sensors placed around the environment and networked together; these can range from video and ambient light to temperature, humidity, or an accelerometer. The second is virtual sources such as social media feeds, weather forecasts, or Wikipedia. Not only do these data sources help keep the visuals fresh, but they can also lead to audience participation, creating a new concert experience.
The visualizer itself will be built upon openFrameworks, a C++ framework. This gives us the ability to easily and intuitively create visuals that will be supported on many different operating systems. The wide array of add-ons for audio, graphics, and animation already built for openFrameworks greatly reduces the time investment of building the base components and opens up our time to create the most important part of OpenStage, the visuals themselves. In order to make setting up OpenStage and configuring the visualizer theme and settings as easy as possible, we will make it so that these processes can all be done via an HTTP webpage hosted by OpenStage on the physical system.
OpenStage aims at giving budding music performers an affordable and less technical avenue for creating visuals for their live performances. Since the software is open source, our revenue stream is in selling completed systems pre-configured to run OpenStage and custom made assets for visualizers.
IBM Services Asset Program: Enterprise Exception Handler (EEH)
Open Project for NCSU: EEH Web, REST and Pattern Interface Development
The Enterprise Exception Handling (EEH) pattern provides a standardized method of reporting exceptions in the organization through the Enterprise Service Bus (ESB) that may be analyzed in the future. Most application and system exceptions are reported in an inconsistent manner and often reported in logs that are distributed across every node in the subsystem. The pattern addresses this concern by standardization and centralization of system-wide exceptions in a heterogeneous environment.
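To make the standardization idea concrete, a centralized exception record might look something like the sketch below. This format is purely hypothetical; the actual EEH message schema is defined in the pattern documentation provided to the team.

```python
import json
import datetime

def build_exception_record(app, node, severity, code, message):
    """Build a standardized, centralizable exception message.
    All field names and the example code are illustrative, not the EEH schema."""
    return json.dumps({
        "application": app,
        "node": node,          # which node in the subsystem raised it
        "severity": severity,  # e.g. ERROR or WARN
        "code": code,          # exception code from the catalog
        "message": message,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
```

A record like this could be published to the ESB by every application, giving one consistent, queryable stream of exceptions instead of logs scattered across nodes.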
Documentation (To Be Provided Once Team is Formed):
A Web interface to the EEH has been developed and published. However, the user interface provides a basic framework to view exceptions from the database. This interface will be enhanced in the proposed project by:
Creating a REST API-based interface for consuming exception data and storing it in the database. The Java code for the exception-handling daemon exists in the Core package for processing exceptions. The project objective is to provide an interface to this command-line utility so that its functionality can be used by API-based applications.
The exception handling pattern has been one of the most effective methodologies for standardizing and reporting exceptions in Java and Message Broker flows. However, thousands of existing applications find it difficult to manually incorporate the changes needed to enable EEH. This project will provide the ability to scan and discover existing exception handling in Java and Message Flows and help the developer re-engineer their code to use EEH by mapping the discovered exceptions to those in the catalog.
Technology knowledge required: Java, JavaScript, Ajax, Servlets; Apache Tomcat preferred
Additional optional knowledge: IBM MQ, MQ Telemetry Transport, IBM Integration Bus, WebSphere Application Server
Ipreo has a large number of corporate clients, such as Microsoft and Nike. Its main job is to help companies increase their valuation, in other words, to manage the stock price. To do this, Ipreo reaches out to current and potential investors and tells them about the company to create demand for its stock. Investors have varied portfolios: they may hold securities in different regions and industries, purchased at different times for different prices. An investor may choose to sell some or all of their shares of a given security. These decisions can be based on a variety of factors (a few detailed examples are listed below). The goal of this project is to find correlations between these factors in historical data and use those correlations to identify trends that can predict whether an investor is going to sell in the future.
Example Factors Influencing an Investor to Sell Securities
Example Trend Factors
Why do this Project?
This model should start simply and expand from there. Start with a single model that compares historical point-in-time cost basis against the quote price within individual portfolios, and use this data to make sense of those factors in the present. For example: "Companies identified as 'Portfolio Type A' almost always sell when there is at least a 30% profit or a 50% loss; 'Portfolio Type B' is 20/70," and so on. From there, if other factors are found to be relevant, they can be included where helpful, or used to make different types of predictions in different models. All of these decisions will be left up to the architects of the analytic models.
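The starting model can be sketched in a few lines: compute the return from cost basis to quote price and flag positions that cross a portfolio type's sell thresholds. The threshold numbers below simply restate the example in the text; they are illustrative, not real parameters.

```python
# Illustrative thresholds taken from the example above, not real Ipreo data.
SELL_THRESHOLDS = {
    "Portfolio Type A": {"profit": 0.30, "loss": -0.50},
    "Portfolio Type B": {"profit": 0.20, "loss": -0.70},
}

def likely_to_sell(portfolio_type, cost_basis, quote_price):
    """Flag a position whose gain or loss crosses the type's sell thresholds."""
    ret = (quote_price - cost_basis) / cost_basis
    t = SELL_THRESHOLDS[portfolio_type]
    return ret >= t["profit"] or ret <= t["loss"]
```

From this baseline, additional factors found to correlate with selling can be layered in, either into this model or into separate models for different prediction types.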
Students will be required to sign over IP to Ipreo when the team is formed.
Approaches to & Challenges of Storing Time-Series Data on Graph Database Platforms
Over the course of a lifetime, there are innumerable events impacting health and wellness, both favorably and adversely. Some, like nutrition, hygiene, and genetic disorders, tend to be a direct source of causality, while others, like access and affordability, are a secondary source. Today, health and wellness events and the recommended next steps are often captured in personal health records (PHRs), electronic medical records (EMRs), and electronic health records (EHRs). The lack of interoperability of these systems and protocols notwithstanding, the advent of wearable devices used for continuous measurement of vitals, as well as those used in telemedicine, poses additional challenges in aggregating multiple parallel streams of information. Imagine a physician treating a patient for an illness on an ongoing basis. Data pertinent to the patient from many disparate sources must be viewed holistically and in select combinations, contrasted against disease progression models, in order to arrive at a diagnosis. It can be very challenging to assimilate this information, and even harder to quickly share and discuss it with peers and specialists. How can we address these challenges?
Develop an approach to manage longitudinal time-series data such as exercise, diet, medication, vitals, and feelings in one or more time-series database instances. Use OpenTSDB as the time-series database. Come up with an approach to securely manage multiple users' streams in these database instances. Users should be able to manage privacy and consent, deciding whom they wish to share their time-series data with, and the interface should be exposed accordingly.
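For orientation, OpenTSDB ingests datapoints over HTTP at its `/api/put` endpoint as JSON objects carrying a metric name, timestamp, value, and tags. The sketch below builds such a payload; the metric names and the per-user tagging convention are our own assumptions, not part of OpenTSDB.

```python
import json

def make_datapoint(user_id, metric, timestamp, value):
    """Build one OpenTSDB datapoint; metric/tag naming here is illustrative."""
    return {
        "metric": metric,            # e.g. "vitals.pulse" (our own convention)
        "timestamp": timestamp,      # Unix epoch seconds
        "value": value,
        "tags": {"user": user_id},   # tag each stream per user for retrieval
    }

batch = [
    make_datapoint("u42", "vitals.pulse", 1700000000, 72),
    make_datapoint("u42", "exercise.steps", 1700000000, 5300),
]
# This JSON array would be POSTed to the OpenTSDB instance's /api/put endpoint.
payload = json.dumps(batch)
```

Tagging every datapoint with a user identifier is one simple way to partition streams per user; enforcing privacy and consent on top of that would be part of the project's design work.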
Develop a mobile application that is based either on Google Fit or Apple's HealthKit that would upload configurable personal information into the above time-series database platform.
Develop a UI/UX approach to manage discrete milestones on an underlying graph database. Use Neo4j as the graph database. A milestone is a memorialization of a visit to a physician or hospital during which one or more of the patient's time-series may be deliberately viewed, started, altered, or terminated, all based on prior consent. An example would be a physician commencing a new medication or altering an existing dosage or exercise regimen. The milestone UI should also be able to visualize milestones in meaningful permutations and combinations, for example viewing pulse and blood pressure superimposed with daily exercise.
Our vision is that this collection of capabilities can mimic real-world challenges, with users integrating their personal devices into their PHR systems and, with consent, selectively sharing data from their PHRs with their physicians' EMR and EHR systems. Additional requirements on analytics may be added as functionality becomes clearer. Students are expected to think about potential use cases that will impact privacy, security, compliance, or entitlements, and about how those would impact analytics, report generation, visualizations, and management aspects. Some research on interoperability and compatibility across protocols and standards is highly recommended.
Target Source Code: Apache Software Foundation (OPEN SOURCE)
Who We Are
Schwab Performance Technology (SPT) is a subsidiary of Charles Schwab that builds comprehensive technology offerings for financial advisors to help them manage their business and their clients' investments. A large part of SPT's technology team is based in Raleigh, NC. This team develops applications for desktop, web, and mobile platforms that provide industrial-strength solutions using technologies and frameworks such as C#, Mongo, SQL, C++, HTML5, AngularJS, JSON, and REST. We concentrate on the fundamentals of architecture, design, development, and quality assurance to build applications that stand the test of time, which is critical in the financial industry.
What SPT Gains From the Senior Project
SPT is interested in engaging with the brightest students in top tier computer science programs. New projects have allowed us to open a number of positions for recent and upcoming graduates. We hope to establish strong ties with the best students during their senior year which could then lead to job opportunities when they graduate.
These students will build an application that will help SPT improve how the company uses agile development practices. Schwab uses SCRUM development methodologies. One challenge is providing a roadmap view for our business partners, who require longer-term planning information that is not consistent with SCRUM practices. Our development teams have created an ad hoc methodology to provide this information, but it is a manual whiteboard activity that takes significant effort to transcribe to an electronic format (currently Excel) and maintain.
SPT is proposing a tool that will allow users to create visual representations of roadmaps and team layouts for Agile projects. Most SCRUM tools focus on epics, user stories, tasks, team velocities, sizings, and backlog management. While these are all important aspects to the methodology, there is a gap between the sprint to sprint view of SCRUM and the quarterly and annual view of projects required by business and finance to plan application development and deployment. The Roadmapper application is intended to bridge this gap. The tool will allow users to create SCRUM teams with specified capacities, manage backlog epics and/or user stories, establish complexities, relationships, and development needs from specific SCRUM teams, and most importantly provide a dynamic user interface that allows a team to quickly show what is required to deliver on a loosely defined set of epics. The project will deliver a working application with the intent that Schwab will use the application for future program estimation and planning.
Entering Base Information and Settings
Allow users to create the base information and settings required to drive Roadmapper. This will include: Sprint List Settings, Releases, Scrum Team Skills, Teams with staff and team's specific skill set, Products, Project List, and Epic Information.
Views to Assign Epics to Teams and Map to the Roadmap
The views are highly interactive and allow users to quickly build a high level roadmap.
Automatically assigning epics to teams, associating teams for epic distribution to improve velocity, and exporting to TFS are stretch goals.
Project Approach and Technology
The project will be based on some of the technologies SPT has been using for product development. This list may be altered depending on the team's existing technology strengths. SPT technologies may include:
SPT strives to develop using best practices while remaining agile. We use the SCRUM methodology and will run the senior project within a similar methodology used by our teams. One mentor will be selected from our development team to be the primary interface with the project team, but other members of our staff will be brought in from time to time to assist with the project.
Success will be based on the following accomplishments:
Students will be required to sign over IP to Schwab Performance Technologies when the team is formed.
Bronto Engineering Network of Sociality
Bronto Software offers a sophisticated marketing platform for its corporate clients, tracking billions of events per month. Customer events such as opens, clicks, and purchases (conversions) are recorded and analyzed in real time and displayed to our customers to give them a clear view of how their campaigns are performing.
Years back, our CEO wrote a little employee directory for the company. We'd like to have a better one for the Engineering Dept. We will store user profiles, including team membership and current projects. We need login authentication so that non-engineers cannot view it.
After these most basic requirements, the feature set is up for discussion. We expect the project would be much more interesting and useful if it included some of the following features.
You also have to name this thing for us. And if there's an artsy person on your team, make us a logo.
Technology choices are up to the team; the Bronto advisors will advise. We like Java, Python, PostgreSQL, React, Backbone, and Bootstrap.
Release Tracking Visualization
NCSU Senior Design Project
As a software center, DBGT releases something almost every weekend. Keeping track of the hundreds of releases across different programs and stages of readiness is a very time-consuming, manual process. Teams and senior management have little visibility into releases in their own domains or in other areas of the company. Much time and effort are expended trying to collect, administer, explain, and update people on the status of a release and any problems that the teams are having.
We are looking for a group of dynamic students who can create and implement a working solution for managing and displaying the center's different releases. We need to reduce the email/spreadsheet spam and create an engaging visual application for viewing the dates and programs. Deutsche Bank can provide representative data sets, a description of the current process, and other information, but we need your ideas and technical expertise to create an innovative solution.
This Semester (Fall 2015)
This semester the goal is to deploy a working version of the product in Deutsche Bank. We have a list of desired actions (collect and review data, make the data visible, and socialize with the company) and some design ideas already, but we need the students' input to help make this a success. We are open to new ideas or technologies that fit within a few technical limitations (e.g., IE 11) that the bank has.
NCSU Student Experience
Senior Design students in the College of Engineering Department of Computer Science will have a unique opportunity to partner together over the course of the semester to create an information tool, as they would in real-world product development. Additionally, students will have access to industry professionals to assist with the design, Agile/Scrum practices, and the overall development, coding, and testing.
Infusion is a global consulting company based in New York. We have offices in Toronto, Houston, Raleigh, Malta, London, Wroclaw, and Krakow. We create digital experiences and emerging technology solutions for retail, financial services, oil and gas, health and life sciences, entertainment, hospitality, education, and the public sector.
Not too long ago, the Chinese stock market lost over 30% of its value in a matter of a few days. Why did this happen? Sure, you could find out by spending all your time watching the stock market and reading every scrap of news that you can get your hands on, but you didn't sign up for a computer science degree to do things manually. Wouldn't it be better if you had a web page that used Natural Language Processing and web crawling to aggregate all that information for you? Of course it would.
The ideal is automated investment portfolio performance commentary (descriptive text), informed by analytics on attribution, risk, and returns data in the context of market news and events, to better understand the underlying factors that drive stock performance: what changed, when it changed, why it changed, and how it impacted the stock's performance.
We challenge your team to develop a proof of concept for part of this idea by answering a simpler question: what just happened, with an automated deduction of why? Do this by correlating news with activity in a portfolio: create a trigger from Bloomberg Commodity Index (BCOM) activity that scrapes the web for concurrent news in the corresponding commodity markets.
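As a rough illustration of what such a trigger might look like, here is a minimal Python sketch that flags large day-over-day moves in an index series and builds a search query a crawler could run for each flagged day. The index values, the 2% threshold, and the query format are all illustrative assumptions, not part of the project specification.

```python
# Hypothetical sketch: flag unusually large daily moves in a commodity
# index series, then build a news-search query for each flagged day.
# Threshold and sample data are made up for illustration.

def detect_triggers(closes, threshold=0.02):
    """Return (day_index, pct_change) pairs where the day-over-day
    change in the index exceeds the threshold in either direction."""
    triggers = []
    for i in range(1, len(closes)):
        change = (closes[i] - closes[i - 1]) / closes[i - 1]
        if abs(change) >= threshold:
            triggers.append((i, change))
    return triggers

def news_query(commodity, date):
    """Build a search query a web crawler could use to find concurrent news."""
    return f'"{commodity}" price news {date}'

if __name__ == "__main__":
    bcom = [100.0, 100.5, 97.2, 97.0, 99.9]  # made-up index closes
    for day, change in detect_triggers(bcom):
        print(day, round(change, 4), news_query("crude oil", f"2015-09-0{day}"))
```

A real version would replace the hard-coded list with a live BCOM feed and hand each query to the crawling/NLP pipeline.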
ASD Simplified Technical English Grammar Proofing
Simplified Technical English (STE), or Simplified English, is a controlled language originally developed for aerospace industry maintenance manuals. STE was created for use in the aerospace and defense industries, but other industries have used it as a basis for their own controlled English standards. STE is a carefully limited and standardized subset of English that addresses difficulties in comprehension such as complex sentence structures, confusing word forms, and ambiguous vocabulary. STE is defined by the specification ASD-STE100, which is maintained by the Simplified Technical English Maintenance Group (STEMG). The specification contains a set of restrictions on the grammar and style of procedural and descriptive text. It also contains a dictionary of approximately 875 approved general words. The specification also provides writers with guidelines for adding technical names and technical verbs to their documentation.
While there are several existing products to assist in document proofing for ASD-STE, no clear leader has emerged. Additionally, none of them integrates seamlessly into Microsoft Office.
The official website is http://www.asd-ste100.org/
The team will work with KPIT to scope which aspects of the specification will be required and implement a Grammar Proofing system based on the identified rules.
This project should be created with an emphasis on code reuse and supported APIs. Significant attention should be focused on ensuring future compatibility through minimal code modifications.
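To make the scope concrete, here is a minimal Python sketch of two kinds of checks such a proofing system might run: a sentence-length limit and an approved-word dictionary lookup. The tiny word list and the 20-word limit are placeholders for illustration; the real system would load the full ASD-STE100 dictionary and the rule set the team scopes with KPIT.

```python
# Illustrative sketch of two checks an STE proofing tool might run.
# APPROVED and MAX_WORDS are placeholders, not the real STE data.
import re

APPROVED = {"remove", "the", "bolt", "and", "then", "install", "a", "new"}
MAX_WORDS = 20  # placeholder sentence-length limit

def check_sentence(sentence):
    """Return a list of human-readable findings for one sentence."""
    words = re.findall(r"[a-zA-Z']+", sentence.lower())
    findings = []
    if len(words) > MAX_WORDS:
        findings.append(f"sentence has {len(words)} words (limit {MAX_WORDS})")
    for w in words:
        if w not in APPROVED:
            findings.append(f"word not in approved dictionary: {w}")
    return findings
```

For example, "Remove the bolt." passes cleanly, while "Loosen the bolt." is flagged because "loosen" is not in the (placeholder) approved list. Packaging checks like these behind a stable API is what would let the tool plug into Microsoft Office later.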
The Laboratory for Analytic Sciences is charged with developing the science of analysis and analytic methodologies. This simultaneous study of tradecraft and technology serves as the foundation upon which all LAS efforts are positioned. Tradecraft is concerned with measuring, understanding and advancing the fundamentals of intelligence analysis across a variety of mission-related analyses. Technology focuses on the study of the scientific fundamentals and abstract analytics that will enable the LAS to develop a framework for data-intensive computing and intelligence analytics. Unification of tradecraft and technology shall be accomplished by creating a novel user experience to allow analysts and developers to visualize, interact with, and measure performance improvements that derive from advances in the science of analysis.
The LAS is interested in the methodologies employed by the SDC team to develop a single-, dual-, and multi-player game. We believe the game format is compelling and are interested in the collaborative nature of the development, from the user interface to the AI used in single-player mode. Observing and measuring how collaboration among the students occurs to implement a sequence-based project contributes to understanding different kinds of goal-oriented collaboration environments. The concept of "Space Pong" derives from the Final Report of a previous SDC Project and as such was motivated not only by the mentor, but also by the students participating in that SDC Project. Therefore, Space Pong may be fun for the students while also serving two goals: understanding how teams collaborate to achieve an objective, and building a reactive platform for studying subsequent autonomous and semi-autonomous agents.
The Fall 2015 LAS senior design team will design and implement a Pong game for iOS devices (iPhone/iPad). This game will have the following properties:
Docker Deployment Tool for StorageGRID Webscale
NetApp's StorageGRID Webscale is a software solution to store massive amounts of data for large archives, media repositories and web data stores. It is designed for the hybrid cloud and supports standard protocols, including Amazon S3 and SNIA's CDMI. The 10.1 and prior versions of StorageGRID run on an SLES/VMware environment.
NetApp's RTP Engineering team is making changes to the StorageGRID architecture to enable StorageGRID to run in a KVM environment and, by doing so, to run in a Docker container.
The Fall 2015 NetApp senior design team will create a CLI tool, written in a high-level language, that will aid in StorageGRID deployment. The tool will extract the appropriate components from a StorageGRID TGZ file and deploy them within a KVM and a Docker environment. A goal of the project is to fully deploy a StorageGRID environment on multiple KVM nodes. Deploying StorageGRID in Docker containers on a bare-metal Linux operating system is a stretch goal.
Once complete, this tool will be used by NetApp personnel and NetApp customers to deploy StorageGRID. Our preference is for the tool to be written in Ruby.
NetApp will provide the team with needed hardware and software resources, along with support and training. The team will be given access to NetApp's StorageGRID TGZ file for version 10.2 (beta), including the SGI tool. The team will also have access to a Linux server with 64 GB of RAM and 1 TB of storage, with support for Docker and KVM.
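As a rough sketch of the deployment flow (written in Python for illustration, though the delivered tool may well be in Ruby per the preference above), the tool might unpack the TGZ and then construct the commands that would start each grid node in a container. The node names, image tags, and volume layout here are assumptions for illustration, not taken from the actual StorageGRID bundle.

```python
# Hypothetical sketch of the two core steps of the deployment tool:
# unpack the distribution TGZ, then build (but not execute) the
# 'docker run' command for each grid node. All names are assumptions.
import tarfile

def extract_bundle(tgz_path, dest):
    """Unpack the StorageGRID distribution TGZ into a working directory."""
    with tarfile.open(tgz_path, "r:gz") as tgz:
        tgz.extractall(dest)

def docker_run_command(node_name, image, volume):
    """Build the command list that would start one grid node in Docker."""
    return ["docker", "run", "-d", "--name", node_name,
            "-v", f"{volume}:/var/local", image]

if __name__ == "__main__":
    cmd = docker_run_command("sg-admin-1", "storagegrid:10.2", "/srv/sg-admin-1")
    print(" ".join(cmd))
```

Keeping command construction separate from execution makes the tool easy to test and lets the same node description target either a Docker container or a KVM guest.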
Please Only Select This Project if You Are a Member of the Oracle Pre-Formed Team
Virtualization of Test Resources in OpenStack
The Oracle Communications Global Business Unit (CGBU) delivers a powerful combination of technology and comprehensive, integrated business applications for enterprises and service providers. Oracle Communications builds several products on a platform stack consisting of a customized Oracle Linux OS (TPD), a proprietary database system, and several GUI interfaces. As the core of many products, this software stack is held to the highest standards of quality and is subject to rigorous testing procedures.
Oracle's CGBU Platform Design Verification group is responsible for this testing, and has developed the Design Verification Automation Tool (DVAT) to assist with the verification process. The DVAT executes automated test plans, collects pass/fail results, and reports these results back to the user. Previous senior design efforts with NCSU students have positively impacted the DVAT. These efforts resulted in two prototype features and one feature that was integrated into the production DVAT. This semester, Oracle Communications would like to further build on this success and focus on enhancing the DVAT's test automation capabilities.
Automated test plans act on server resources specified as input parameters. Today these resources must already exist, physically or virtually, before a test can use them, and they must often be rebuilt to a known good starting state for a given test. Our users must track, manage, and maintain these resources, or build automation on top of DVAT, to fully automate testing. It would be beneficial to specify a VM template to a test rather than a specific server, reducing complexity and decreasing test time.
Oracle's Fall 2015 senior design team will:
Stretch goals are above and beyond Oracle's requirements for completing the Fall 2015 senior design project. Each goal requires that the design team do research, analysis, and requirements gathering before implementation.
Improve Automated TPD Testing
Group Ownership for Resources
Heat Template Support
Resource Elasticity and Scheduling
COMCOL is a CGBU platform component that provides highly available telecom-speed database services.
The team must:
Oracle will provide:
Implementation technologies include PHP, Python, CoffeeScript, Node.js, and RESTful APIs on a Fedora platform, as well as OpenStack and related APIs.
Hardware-agnostic mini-game for measuring performance
The goal of this project will be to develop a mini-game to use as a workload for measuring the speed of various personal devices, including phones, tablets, Chromebooks, and laptops. Ideally, the student team would be diverse and the mini-game would reflect diverse viewpoints. The mini-game must use the device's resources in realistic ways. It does not need to be a full-featured game, but it must include a demo mode and the ability to record logging and timing information.
The project requires the following:
Implementing the components of the game engine and being able to run a scripted demo will fulfill the basic requirements of this project. As noted below, making the mini-game fully playable is a stretch goal. If the project is successful, PT will make it available to the public on the BenchmarkXPRT Web site.
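As one concrete reading of the logging and timing requirement, here is a minimal Python sketch of a frame-timing recorder that the mini-game's loop could feed. The metric names and the summary format are assumptions for illustration, not part of the project specification.

```python
# Hypothetical sketch: record per-frame durations from the game loop and
# aggregate the summary numbers a benchmark result log might report.
# Metric names are illustrative assumptions.

class FrameTimer:
    def __init__(self):
        self.frame_ms = []

    def record(self, ms):
        """Log the duration of one rendered frame, in milliseconds."""
        self.frame_ms.append(ms)

    def summary(self):
        """Aggregate stats for the results log."""
        n = len(self.frame_ms)
        total = sum(self.frame_ms)
        return {
            "frames": n,
            "avg_ms": total / n,
            "worst_ms": max(self.frame_ms),
            "fps": 1000.0 * n / total,
        }
```

In a scripted demo mode, the game loop would call `record()` once per frame so that identical workloads produce comparable timing summaries across phones, tablets, Chromebooks, and laptops.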
Who we are
The BenchmarkXPRT Development Community has been developing tools to measure performance for over 4 years. Current offerings run on Android devices, Chromebooks, Windows-based PCs, and Web-connected devices. The XPRT benchmarks are used by major technology vendors, as well as by members of the technology press, such as Anandtech, CNET, and Tom's Hardware.
The BenchmarkXPRT Development Community is administered by Principled Technologies, a world leader in technology assessment and fact-based marketing.
Students will be required to sign over IP to Principled Technologies (and sign an NDA) when the team is formed.