Over the past decade, scientists and the public have become increasingly aware of declines in the health of managed honey bees and in populations of wild bees such as bumble bees. To help counteract these declines, many programs encourage individuals to install “pollinator gardens” on their properties. These habitats typically draw on lists of recommended perennial plants known to provide pollen and nectar for bees. Although these habitats can be effective, in our local outreach and extension work we have noted that garden design remains a barrier for many homeowners, who are unsure how to create an aesthetically appealing garden using recommended pollinator plants.
To overcome this barrier, we would like to offer a web-based garden visualization app that lets homeowners map recommended plants into a garden space and view the virtual planting in three dimensions prior to installation. While similar apps do exist (for example, https://www.bhg.com/gardening/design/nature-lovers/welcome-to-plan-a-garden/), they include few if any suitable plants and lack other desirable features, such as being able to view a garden in both side view and map/top view.
Based on work with the Senior Design Center in two previous semesters, a foundation for the envisioned app now exists. It allows users to:
Working from this foundation, we would like to add the following functions:
Additional Considerations
In time, we would like to add additional features to the app/website. In case it is useful to have these on the table from the start, they include the following functions for users:
During weekly sponsor meetings, we will review new developments in the app and provide feedback and clarification related to the goal of making it possible for anyone to design an attractive, bee-friendly garden.
BlackBerry QNX technology includes QNX Neutrino OS and many middleware components. BlackBerry has decades of experience in powering mission-critical embedded systems in automotive and other industries. As the leader in safety-certified, secure, and reliable software for the automotive industry, BlackBerry currently provides OEMs around the world with state-of-the-art technology to protect hardware, software, applications and end-to-end systems from cyberattacks.
For self-driving cars, functional safety is part of the overall safety of a system or piece of equipment and generally focuses on electronics and related software. As vehicles become increasingly connected and reliant on software, new threats emerge; therefore, it is imperative that a vehicle operates safely even when things go wrong. A self-driving car is an extremely complex system built with state-of-the-art technologies, and proving that the system does what it is designed to do, across a wide range of situations and weather conditions, is a great challenge. This requires a stable, secure and efficient operating system.
To ensure mission-critical reliability, BlackBerry QNX continually performs extensive automated testing of its software components by executing hundreds of thousands of tests daily. The current process of identifying trends and issues and generating charts and reports from these tests consumes a tremendous amount of human effort.
Automating this process and generating visual representations of trends and issues would help to identify software issues as early as possible and enable employees to focus on other tasks.
The current dashboard web application, “BlackFish”, is a Single-Page Application (SPA) developed with the Angular framework during previous NCSU senior design projects. The SPA has provided a good starting platform; however, with the current structure of the project, all components must be known at build time, which does not scale well as more pages (tools) are developed. We expect the number of tools to grow to many dozens over time, but an individual user will typically work with only a handful, so there is no need for the dashboard to load 100% of the components for every user. We would like a more fluid, modularized platform that allows smaller pieces of the dashboard to be dynamically added or updated without affecting the whole dashboard.
Previous NC State senior design teams have built the framework for the BlackFish Web Application using a Single Page Application in Angular 7 for the front end, with a Node.js/Express backend, connecting to a MongoDB server (where the test results are stored).
The goal of this semester’s project is to identify an efficient way to convert the existing SPA BlackFish Web Application into a modularized platform that allows for smaller pieces of the dashboard to be dynamically added or updated without affecting or restarting the whole dashboard. This would also have the additional benefit of minimizing the initial load size and time (lazy loading).
One possible solution may be to go from one large single-page application (SPA) to a wrapper SPA made up of multiple individual SPAs. However, the team should identify and propose alternative solutions to accomplish this goal during the first few design meetings. The mentors, together with the team, will discuss the pros and cons of each solution and determine the best implementation. At that point the team should develop the selected option using the existing BlackFish Web Application as a base.
The team will be expected to:
Some prior experience developing web (server-client) applications is strongly recommended for this project.
Members of the team would be expected to have or learn some of the following skills:
The client-side dashboard must be written in a language supported by modern web browsers; a modern JavaScript framework is strongly suggested. Any additions or updates to the current code base must run on both Linux and Windows OSes.
BlackBerry QNX mentors will provide support at the beginning of the project for a project overview, initial setup, and assistance in the software tool selection.
The students will also be provided with:
Mentors will be available to provide support, planning sessions, Q & A, technical discussions and general guidance throughout the project. Mentors will also be available for meetings on campus as necessary.
BlackBerry is an enterprise software and services company focused on securing and managing IoT endpoints. The company does this with BlackBerry Secure, an end-to-end Enterprise of Things platform, comprised of its enterprise communication and collaboration software and safety-certified embedded solutions.
Based in Waterloo, Ontario, BlackBerry was founded in 1984 and operates in North America, Europe, Asia, Australia, the Middle East, Latin America and Africa. For more information, visit BlackBerry.com.
Customers rely on QNX to help build products that enhance their brand characteristics – innovative, high-quality, dependable. Global leaders like Cisco, Delphi, General Electric, Siemens, and Thales have discovered QNX Software Systems gives them the only software platform upon which to build reliable, scalable, and high-performance applications for markets such as telecommunications, automotive, medical instrumentation, automation, security, and more.
QNX software is now embedded in 120 million cars that are on the road today. Automotive OEMs and tier ones use BlackBerry QNX technology in the advanced driver assistance systems, digital instrument clusters, connectivity modules, handsfree systems, and infotainment systems that appear in car brands, including Audi, BMW, Ford, GM, Honda, Hyundai, Jaguar Land Rover, KIA, Maserati, Mercedes-Benz, Porsche, Toyota, and Volkswagen.
Several commercial products are available for Optical Character Recognition (OCR) data extraction. Vendor products, however, tend to be expensive. Moreover, as is true with any third-party offering, consumers have limited influence over the evolution of the product and are forced to adapt internal processes to operate within the constraints of a generically designed product. Development of internal technology to support this function would not only benefit Deutsche Bank on several fronts but would also afford exploration into several areas of cutting-edge technology such as optical character recognition and machine learning.
Please review the following high level specifications and requirements.
The below will be refined in greater detail during scoping with the team.
Create a web interface to perform the following:
Students will use their course knowledge, creativity, and technical expertise to delve deep into the exciting domains of optical character recognition and machine learning. DBGT will provide thought leadership, direction, guidelines, technical expertise (as needed), and support in refining the project scope. The DBGT team is here to aid and guide the student workflow, processes, and design ideas. We support and encourage the students’ input on this project. Use of open source and readily available OCR or ML libraries is permitted. Students may use technologies of their choice when completing this project.
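As a rough illustration only (not a prescribed design), the sketch below shows how an open-source OCR library could sit behind such a web interface; it assumes the Tesseract engine via pytesseract and Pillow, and the file name is hypothetical.

```python
# Hedged sketch: text extraction behind the web interface using the
# open-source Tesseract engine (pytesseract + Pillow are assumptions;
# any comparable OCR library could be substituted).
from PIL import Image
import pytesseract

def extract_text(image_path: str, lang: str = "eng") -> str:
    """Run OCR on an uploaded document image and return the raw text."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image, lang=lang)

if __name__ == "__main__":
    print(extract_text("sample_invoice.png"))  # hypothetical uploaded file
```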
Senior Design students in the College of Engineering Department of Computer Science will have a unique opportunity to partner together over the course of the semester to explore the exciting and developing field of OCR/ML with direct application to a real business problem. Additionally, students will have access to industry professionals to assist in the software design, agile practices, and the overall code development and testing. Students will be allowed to share the final product as part of their own portfolio while job seeking.
The Dell/EMC senior design project will give the team a chance to develop software that improves the real-world performance of Dell/EMC backup and restore software. The team will identify optimal strategies for handling different types of backup / restore loads, then apply those strategies to new applications to automatically improve their performance.
Data Domain is the brand name of a line of backup products from Dell/EMC that provide fast, reliable and space-efficient online backup of files, file systems and databases ranging in size up to terabytes of data. These products provide network-based access for saving, replicating and restoring data via a variety of network protocols (CIFS, NFS, OST). Using advanced compression and data de-duplication technology, gigabytes of data can be backed up to a Data Domain server in just a few minutes and reduced in size by a factor of ten to thirty or more.
Our RTP Software Development Center develops a wide range of software for performing backups to, and restoring data from, Data Domain systems, including the Data Domain Boost libraries used by application software to perform complete, partial, and incremental backups and restores.
As Data Domain makes its way into even more data centers, the need to accommodate additional workloads increases. Customers must be able to back up their data efficiently to meet constantly decreasing backup time periods.
The same requirement also applies to restoring data and databases. Dell/EMC has developed technology for increasing efficiency and decreasing the time needed to back up data and keep it protected.
The focus of this project is to determine the optimum behavior of the Data Domain software for several data restore access patterns. Depending on the behavior of the application performing the data restore, we need to determine the optimum settings for several parameters that will modify the behavior of our internal software library.
Students will use predefined access methods to restore data files from a Data Domain Virtual System and, based on the time / throughput of the restore, modify one or more parameters to decrease the time of the restore, improve the throughput of the restore, or both.
These parameters (collectively called a profile) will be used by our DD Boost software to optimize the restore process.
We want to determine the optimum parameters of workflow profiles for different read access patterns. Dell/EMC did a project with the Senior Design Class in spring 2019 that determined a “golden” profile for a specified set of workflows. The output of this project was a ReadOp system, which used guided Linear Regression analysis to arrive at an optimal access pattern profile for a specified workload.
In this project we want to build on that previous work and enhance this ReadOp system.
The previous work determined optimal settings for 3 specific access patterns:
In the previous work the NCSU Senior Design team used supplied tools and developed software to restore a database and, based on the results, modified one or more of the profile parameters to reduce the time of the restore. The results for each individual test were stored using an internal database and data-mined to help identify the best parameter values for each profile.
This work was done “statically” or “off-line” in that the parameters for each test run were set before the run and remained constant during the run. Each run would modify the parameters based on the results of previous test runs. Analysis of the results and determination of good parameter values was then done after the runs using data mining.
The runs used a Data Domain network optimization protocol for backup/restore of data called DD Boost. DD Boost includes an Access Pattern Detection (APD) algorithm that can identify most use cases that are supported. The APD logic can be applied to DD Boost to control read-ahead cache. If the access pattern is detected to be Sequential, read ahead caching will be enabled/re-enabled. If the access pattern is detected to be Random, read ahead caching will be disabled.
In this project the team will use an enhanced DD Boost and APD that adjusts and modifies the input APD thresholds during a single restore run, to account for changes in access patterns during a restore workload. This will dynamically adjust the APD parameters based on the history of reads during the run. The purpose of this is to determine how quickly the parameters converge to optimal or near optimal values. The parameters that control when the original input values are modified and by how much will be varied on each run to determine overall optimal system performance.
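As an illustration of the idea (not the actual DD Boost or APD code), a dynamic detector might look roughly like the following; the window size, threshold, and adaptation step are made-up stand-ins for the real input parameters.

```python
# Illustrative sketch only: classify recent reads as sequential vs. random and
# nudge the detection threshold using the history of the current restore run.
from collections import deque

class DynamicAPD:
    def __init__(self, window=64, seq_threshold=0.8, step=0.05):
        self.offsets = deque(maxlen=window)   # recent (offset, length) reads
        self.seq_threshold = seq_threshold    # fraction treated as "sequential"
        self.step = step                      # per-window adaptation rate

    def record_read(self, offset, length):
        self.offsets.append((offset, length))

    def sequential_fraction(self):
        reads = list(self.offsets)
        if len(reads) < 2:
            return 1.0
        seq = sum(1 for (o1, l1), (o2, _) in zip(reads, reads[1:]) if o2 == o1 + l1)
        return seq / (len(reads) - 1)

    def read_ahead_enabled(self):
        frac = self.sequential_fraction()
        # Move the threshold toward the behaviour observed so far in this run,
        # so the setting converges instead of staying at its static input value.
        self.seq_threshold += self.step * (frac - self.seq_threshold)
        return frac >= self.seq_threshold   # True: enable read-ahead caching
```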
The advantages of implementing this dynamic version of APD in DD Boost are:
The project being proposed consists of three phases: (1) become familiar with the products and results from Spring 2019; (2) determine the optimum dynamic access pattern profile for sequential restores; and (3) determine the optimum dynamic access pattern profiles for other profiles as time permits (random restore access, the sequential/random “hops” access pattern, and others to be determined).
Using the software supplied by Dell/EMC & from the Spring 2019 Senior Design project:
Using the software supplied by Dell/EMC & from the Spring 2019 Senior Design project:
This project provides an opportunity to attack a real-life problem covering the full engineering spectrum from requirements gathering through research, design and implementation and finally usage and analysis. This project will provide opportunities for creativity and innovation. Dell/EMC will work with the team closely to provide guidance and give customer feedback as necessary to maintain project scope and size. The project will give team members an exposure to commercial software development on state-of-the-art industry backup systems.
The data generated from this engagement will allow Dell/EMC to increase the performance of the DDBoost product set and identify future architecture or design changes to the product offerings.
Dell/EMC Corporation is the world's leading developer and provider of information infrastructure technology and solutions. We help organizations of every size around the world keep their most essential digital information protected, secure, and continuously available.
We help enterprises of all sizes manage their growing volumes of information—from creation to disposal—according to its changing value to the business through big data analysis tools, information lifecycle management (ILM) strategies, and data protection solutions. We combine our best-of-breed platforms, software, and services into high-value, low-risk information infrastructure solutions that help organizations maximize the value of their information assets, improve service levels, lower costs, react quickly to change, achieve compliance with regulations, protect information from loss and unauthorized access, and manage, analyze, and automate more of their overall infrastructure. These solutions integrate networked storage technologies, storage systems, analytics engines, software, and services.
Dell/EMC’s mission is to help organizations of all sizes get the most value from their information and their relationships with our company.
The Research Triangle Park Software Design Center is a Dell/EMC software design center. We develop world-class software that is used in our VNX storage, Data Domain backup, and RSA security products.
Siemens Healthineers develops innovations that support better patient outcomes with greater efficiencies, giving providers the confidence they need to meet the clinical, operational and financial challenges of a changing healthcare landscape. As a global leader in medical imaging, laboratory diagnostics, and healthcare information technology, we have a keen understanding of the entire patient care continuum—from prevention and early detection to diagnosis and treatment.
At Siemens Healthineers, our purpose is to enable healthcare providers to increase value by empowering them on their journey towards expanding precision medicine, transforming care delivery, and improving patient experience, all enabled by digitalizing healthcare. An estimated 5 million patients globally benefit every day from our innovative technologies and services in the areas of diagnostic and therapeutic imaging, laboratory diagnostics and molecular medicine, as well as digital health and enterprise services. We are a leading medical technology company with over 170 years of experience and 18,000 patents globally. Through the dedication of more than 48,000 colleagues in over 70 countries, we will continue to innovate and shape the future of healthcare.
Our service engineers perform planned and unplanned maintenance on our imaging and diagnostic machines at hospitals and other facilities around the world. Frequently, the engineers order replacement parts. The job of Managed Logistics is to make the process of sending these parts to the engineer as efficient as possible. We help to deliver confidence by getting the right part to the right place at the right time.
Despite our best efforts, occasionally things go wrong somewhere along the supply chain, and we fail to fulfill an engineer’s order in a timely and correct manner. To reduce the number of times this happens, we keep track of these orders, analyze them, and discuss them. Our goal with this process is to identify and address an underlying root cause.
We use a third-party logistics team to help run our warehouse efficiently. We include them in our problem-identifying discussions when a warehouse issue leads to unsatisfactory orders. We call these warehouse-related issues Customer Order Discrepancies (COD). Currently, both our team and the third-party team keep track of information about these CODs in Excel spreadsheets, and there are email chains for each issue.
A dashboard provides a way to visualize and interact with a complex set of data. In addition to the CODs, the Managed Logistics team is in the process of transferring a lot of other spreadsheet-and-email-based processes into a web portal that will serve as a “one stop shop” for communication and issue tracking. [The Senior Design teaching staff expects that the student team will have access to the in-progress web portal code, and the flexibility to revise it in the process of building the Customizable Dashboard Generator.]
We are seeking a streamlined way to rapidly develop live, interactive, updatable dashboards for these processes. Our vision is that we could define some parameters for a new dashboard and have it quickly ready for deployment.
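As a loose sketch of what “define some parameters and deploy” could mean (the class, file, and column names below are hypothetical, not taken from the existing portal), a dashboard might be described by a small spec and rendered from the underlying spreadsheet with pandas:

```python
# Hypothetical parameter-driven dashboard definition (names are illustrative).
from dataclasses import dataclass, field
import pandas as pd

@dataclass
class DashboardSpec:
    title: str
    source_file: str                       # e.g. the COD tracking spreadsheet
    group_by: str                          # column to aggregate on
    metrics: list = field(default_factory=list)

def render_dashboard(spec: DashboardSpec) -> str:
    """Return a simple HTML summary of the source spreadsheet."""
    df = pd.read_excel(spec.source_file)
    summary = df.groupby(spec.group_by)[spec.metrics].count()
    return f"<h1>{spec.title}</h1>\n" + summary.to_html()

# Example: a COD dashboard defined entirely by parameters.
cod_spec = DashboardSpec(title="Customer Order Discrepancies",
                         source_file="cod_tracking.xlsx",   # hypothetical file
                         group_by="root_cause",             # hypothetical column
                         metrics=["order_id"])
```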
With the dashboard generator in place, we would like to create a dashboard for the COD process. With this web dashboard the ML team and 3rd Party team will be able to have more efficient problem-solving sessions because they will be interacting with the same real-time data in the same format. [The Senior Design teaching staff expects this important goal to involve data and process modeling as well as user experience design.]
Students will get the opportunity to communicate frequently with our development team and a key end-user. At first, these meetings will be useful to define what we expect in terms of parameters and customization for the dashboard generator and to mutually agree on a more concrete set of requirements. Later, we will provide clarification and feedback to ensure both goals of the project are achieved.
To get the most out of this project, experience with front-end development and Python is preferred.
Founded in Roanoke, VA in 1932, Advance Auto Parts (AAP) is a leading automotive aftermarket parts provider that serves both professional installer and do-it-yourself customers. Our family of companies operates more than 5,100 stores across the United States, Canada, Puerto Rico and the Virgin Islands under four brands: Advance Auto Parts, Carquest Auto Parts, WORLDPAC and Autopart International. Adapting the latest in data and computer science technology, the Advance AI team aims to bring machine learning and artificial intelligence to the automotive aftermarket parts market to turbo drive it forward into the digital era.
As AAP looks ahead to what the future of auto-parts retail might look like, one certain element will be maximizing the use of mobile technology to deliver an amazing customer experience. To this end, we propose a project to enable a fleet of independent mobile mechanics to provide common maintenance and repair services in an on-demand, ad hoc, and completely customer-centric manner (e.g., an oil change or wiper-blade replacement while the customer is at work and the vehicle is in a parking lot).
The bulk of this project should center on an Android- and iOS-compatible mobile app experience focused on the mobile mechanic(s). For the purposes of this project, the backend can be set up as a simple API server that would enable an eventual rollover to a full suite of services without a dramatic redesign or recode of the app.
Founded in Roanoke, VA in 1932, Advance Auto Parts (AAP) is a leading automotive aftermarket parts provider that serves both professional installer and do-it-yourself customers. Our family of companies operates more than 5,100 stores across the United States, Canada, Puerto Rico and the Virgin Islands under four brands: Advance Auto Parts, Carquest Auto Parts, WORLDPAC and Autopart International. Adapting the latest in data and computer science technology, the Advance AI team aims to bring machine learning and artificial intelligence to the automotive aftermarket parts market to turbo drive it forward into the digital era.
The customers that flow through AAP’s 5,000+ retail locations form the lifeblood of our retail operations. So much of our value proposition for our DIY customers revolves around our ability to offer same-day pickup, a wide variety of parts, and friendly, expert advice, all in a convenient, nearby location. Stores play a critical part in our business, and we want to make sure that we are using technology to maximize our ability to offer an outstanding customer experience. To this end, we propose a project to create a secure, extensible Internet of Things (IoT) platform based on a Raspberry Pi that could be deployed into an AAP store and used to collect data, run image-recognition or other computer vision tasks, and report back to an AWS-based data store.
The bulk of this project should center on the design and implementation of the AWS-based IOT backend network to centrally manage deployed Raspberry Pi(s), deliver instructions and updates, download and upload data, as well as manage memory and compute resources - all while maintaining tight security controls. We are very open to creative solutions on how best to satisfy the various requirements (see details below). For the purposes of this project, the students will be provided with an open source object detection model, and will need to implement simple wrapper code to perform inference on frame grabs from a USB webcam or similar device on the Raspberry Pi(s) for the use case of measuring in-store traffic as outlined below.
AAP will provide all appropriate hardware and access to cloud-based services
For the backend, our only hard requirement is that it is AWS-based. We would welcome student input and ideas on whether it takes advantage of all the AWS native IoT services, or takes the form of a lightweight EC2 machine (or two) coupled with S3. On the Raspberry Pi, our strong preference would be for a linux-based OS of some kind but we would entertain impassioned and/or well-reasoned arguments for other options (such as Windows IoT Core).
The Pi’s camera should capture images at a set (and tunable) interval to be processed by the object detection model. Data to be captured and shipped back:
This use case is just one of many possible uses/tasks for in-store Pi devices which could be run instead of, or simultaneously with, the traffic counting task. When designing the wrapper code to run the model and collect results, consideration should be given to modularity and reuse to simplify the addition of other tasks and functionality.
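A rough sketch of what such a wrapper loop might look like is shown below; OpenCV for frame grabs and the generic detect() callable are assumptions standing in for the object detection model AAP will supply.

```python
# Illustrative Pi-side task wrapper: grab a frame at a tunable interval, count
# detected people, and buffer results for later upload (detect() is a stand-in
# for the supplied object detection model).
import time
import cv2

def run_traffic_counter(detect, interval_s=60, cycles=60, camera_index=0):
    results = []
    cap = cv2.VideoCapture(camera_index)
    try:
        for _ in range(cycles):
            ok, frame = cap.read()
            if ok:
                detections = detect(frame)   # e.g. list of (label, box, score)
                people = sum(1 for label, _, _ in detections if label == "person")
                results.append({"ts": time.time(), "count": people})
            time.sleep(interval_s)
    finally:
        cap.release()
    return results   # shipped to the AWS data store by a separate upload task
```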
The system must be very secure, with best-practices in encrypted communication and endpoint authentication implemented. Creative solutions for protecting against threats from unauthorized physical access are welcome but would not be as high of a priority.
The Raspberry Pi must be able to operate within the store’s network and will not be directly reachable from the Internet. It must be able to call out and establish a connection to backend infrastructure for any communications, both inbound and outbound. Lightweight records of device callbacks should be recorded in some fashion independent of any data collection tasks (such as the traffic counting use case, above).
The Raspberry Pi must be able to connect to AWS-based data stores in order to download and upload data.
To the extent possible, the Raspberry Pi should minimize the number of writes it makes to its storage media during normal operations and use in-memory solutions wherever possible to support information security and help preserve the lifespan of the SD card. To the extent possible given memory and other constraints, the Pi should defer uploads and downloads until after business hours.
The Raspberry Pi will need a minimum capacity for self-management to overcome minor issues and minimize downtime (e.g. the device may have set thresholds for temperature and cpu/memory usage and automatically shut down processes or reboot, as appropriate). This system would need to be able to be deployed in over 5000 different locations by non-technical personnel, potentially in places that are difficult to reach (mounted high up or on ceilings, etc.). As such, even requiring a power cycle for a single device could result in days of downtime – needing to replace the SD card would easily be multiple weeks.
Although it would not be feasible to individually administer all Pi’s in this distributed system, in (hopefully infrequent) cases of individual units requiring manual intervention/attention from an administrator, we would need a solution for establishing an interactive connection to that individual device (which cannot be reached directly from the Internet).
AAP administrators will need an interface to manage the tasking, updates, configuration, and security settings of all, some, or just one of the in-store devices. Such settings could include:
Given the asynchronous, distributed nature of the data flow and the high downtime created by issues that may be easy to solve under normal conditions, students should consider how to test new tasking/update scripts before deployment to minimize this risk.
Interested in building a new strategy game concept to teach power engineering to high school and middle school STEM students, as well as summer camps around Wake County? The FREEDM Systems Center at NC State is sponsoring the Smart Grid Video Game project in order to improve awareness of and attitudes toward power engineering through 4X strategic gameplay. Taking inspiration from proven turn-based 4X strategy games like Civilization VI and Cities Skylines, and using game engines like Unity, artwork software like Aseprite, and sound tools, we believe it is possible to quickly prototype a highly engaging game about power engineering.
The player will play as the CEO of their very own power company, starting in the mid-19th Century operating coal plants and continuing all the way to today. The player shall solve the fundamental engineering challenges of siting their power plants such that they produce the most power for the least amount of money, balancing generation with load in pseudo-real time in order to supply customer demand. This game will play out in a turn-based setting, ideally against competing AI-controlled companies.
Game maps will include a hexagonal tile system with distinctive appearances per region. This map can be 2D or 3D, depending on the complexity of making a sprite-based system vs. a space-based one. This map should be simple, attractive, and provide a nice home for the player’s assets & customers. The size of these tiles relative to the real world & the size of the game map shall be left up to the design team.
Maps can contain terrain features like forests, mountains, oceans, and rivers, which can provide buffs or obstacles to players that can be programmed later in the development cycle. The procedural map generation takes inspiration from Civilization VI, although a very simplified version is desirable.
During the player’s turn, they shall have several decisions to make:
This simple turn structure would be appropriate for making the game easy to learn and play. The turn-based gameplay makes it easy to manage a company spread over a large geographical region, and allows the player time to optimize their decisions. In comparison to a real-time game, a turn-based game focuses less on real-time grid operations and more on the experiences of system planning and running the power company.
For an initial prototype, the player will have 2 types of assets:
Generators can have several different types, fuel sources, power ratings, operating costs, and ramp rates. Smart Grid is meant to be educational, and as such the generators will reflect their real-world counterparts. A brief summary is as follows:
In the game, the player will need to connect their generators to supply dynamic loads by matching a generation profile of their fleet with the load profile of their grid. These loads will consume different amounts of power depending on the time of day. In the real world, loads can be defined by statistics like customer satisfaction, system fault rates, $ per kWh, peak load & base load. There are also qualitative descriptions, such as residential loads, commercial loads, industrial loads, & power quality-sensitive loads. Loads can be a public city or private enterprise. In addition, loads can be impacted by local policies.
Loads will serve as a primary source of revenue for the player, paying a certain rate per kWh consumed. This revenue model will likely be a challenging aspect of the Smart Grid game, and as such will require more coordination with the Subject Matter Expert to make it work effectively.
The player will need to purchase resources in order to fuel their generators. Three fueled generators will be playable in-game: Coal, Nuclear, and Natural Gas. The resource markets for these fuels will model scarcity and increasing prices, and are heavily inspired by the board game “Power Grid”. While in Power Grid the player’s turn order determines when each player gets resources, we will use a bidding system in which players submit requests for resources and the game distributes them in a round-robin turn order, rotating on a turn-by-turn basis which player gets the first resources at the lowest price. A system like this is meant to be simple and easy to program, as well as visually appealing.
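As a rough sketch of that round-robin market (illustrative only; the exact pricing model is up to the design team), requests could be filled one unit at a time in rotating order, with the starting player advancing each game turn:

```python
# Illustrative round-robin resource market (not a final design).
def fill_resource_requests(requests, unit_prices, start_index):
    """requests: {player: units wanted}; unit_prices: one price per unit, cheapest first."""
    players = list(requests)
    purchases = {p: [] for p in players}
    i = start_index
    while unit_prices and any(requests[p] > 0 for p in players):
        p = players[i % len(players)]
        if requests[p] > 0:
            purchases[p].append(unit_prices.pop(0))   # next cheapest unit
            requests[p] -= 1
        i += 1
    return purchases

# Next game turn, start_index = (start_index + 1) % number_of_players,
# so a different player gets first pick at the lowest price.
```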
Since you will only have one semester to complete a rapid prototype, it's important for you to have a good starting point and a clear direction for your project. Here are some tips to get started:
Our research team at NCSU and Intel Corporation is developing decision support tools to help management understand the issues arising in capacity management during new product introductions. This project seeks to develop a prototype of a role-playing game where managers of different organizations involved in new product introductions can assess the impact of their own decisions on the performance of their organization, that of other organizations, and the firm as a whole.
The two principal organizational units involved in new product introductions in high tech firms, such as semiconductor manufacturers, are the Manufacturing (MFG) unit and a number of Product Engineering (ENG) units. Each Product Engineering unit is charged with developing new products for a different market segment, such as microprocessors, memory, mobile etc. The Manufacturing unit receives demand forecasts from the Sales organization, and is charged with producing devices to meet demand in a timely manner. The primary constraint on the Manufacturing unit is limited production capacity; no more than a specified number of devices of all sorts can be manufactured in a given month. The Product Engineering units have limited development resources in the form of computing capability (for circuit simulation) and number of skilled engineers to carry out design work. Each of these constraints can, to a first approximation, be expressed as a limited number of hours of each resource available in a given month.
The Product Engineering groups design new products based on requests from their Sales group. The first phase of this process takes place in design space, beginning with transistor layout and culminating in full product simulation. The second phase, post-silicon validation, is initiated by a request to Manufacturing to build a number of hardware prototypes. Once Manufacturing delivers these prototypes, the Engineering group can begin testing. This usually results in bug detection and design repair, followed by a second request to Manufacturing for prototypes of the improved design. Two cycles of prototype testing, bug detection and design repair are usually enough to initiate high-volume production of the new product. Some complex products, or those containing new technology, may require more than two cycles.
The Manufacturing and Product Engineering groups are thus mutually dependent. Capacity allocated by Manufacturing to prototypes for the Product Engineering groups consumes capacity that could be used for revenue-generating products, reducing short-term revenue. On the other hand, if the development of new products is delayed by lack of access to capacity for prototype fabrication, new products will not complete development on time, leaving the firm without saleable products and vulnerable to competition.
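A toy illustration of that tension (not the sponsor's model; all numbers and names are invented) is sketched below: capacity given to prototypes this month reduces revenue but advances the new product by one test/repair cycle.

```python
# Toy monthly allocation model for illustration only.
def simulate_month(capacity, prototype_request, revenue_per_unit, dev_cycles_done):
    prototypes_built = min(prototype_request, capacity)
    revenue_units = capacity - prototypes_built
    revenue = revenue_units * revenue_per_unit
    if prototypes_built >= prototype_request:
        dev_cycles_done += 1        # ENG can run one more test/repair cycle
    return revenue, dev_cycles_done

revenue, cycles = simulate_month(capacity=1000, prototype_request=100,
                                 revenue_per_unit=50, dev_cycles_done=0)
```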
We seek the development of an educational computer game where students assume the roles of MFG or ENG managers to make resource allocation decisions. The initial module of the game would focus on a single MFG unit and a single ENG unit. Resource allocation decisions will be made manually, giving the players of the game a feel for the unanticipated effects of seemingly obvious decisions.
During the Spring 2019 semester, a Senior Design team (Callis, Callis, Davis & Deaton) developed an initial implementation of this game designed as a turn-based web-based game with a user interface allowing each player to:
This environment successfully allows a basic level of play, which we would now like to enhance in the following directions:
LexisNexis® InterAction® is a flexible and uniquely-designed CRM platform that drives business development, marketing and increased client satisfaction for legal and professional services firms.
Not every action has an equal value. Timing is critical.
Tabbi is a partner of a nationally relevant law firm. She heads a practice group known for their work in intellectual property. With a challenging workload, she always aims to leverage technology to make the best use of her time.
Besides leading litigation, she needs to ensure that her relationships with prior and prospective clients are maintained. This is how business growth opportunities are developed.
Knowing which relationships would benefit from action to maintain their relevance and value is key to her using her time optimally.
By providing Tabbi with a daily actionable list of those clients that have fallen off her radar, seamlessly integrated into her email workflow, she can schedule a meeting or a call with a single click, reinforcing that client relationship.
We will provide a test instance of our InterAction® GraphQL API (secured by OIDC; credentials will also be provided) that gives access to an example customer database together with historical activity records, including details of phone calls, meetings, etc.
By examining these activity data, the system will identify customer relationships that require attention. The user will be notified when relationship activity falls below a healthy level via daily Actionable Messages (aka Adaptive Cards), leveraging the Office 365 email workflow, allowing the user to initiate new actions to help improve the relationship health.
The system will determine what a healthy activity level should be by mining historical data. A subjective, empirical formula for activity health exists, but students are expected to explore a Machine Learning approach to algorithmically derive this threshold.
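As a rough baseline before any ML modelling (the column names and data shape below are assumptions; the real records come from the InterAction GraphQL API), a relationship could be flagged when the time since its last activity exceeds that relationship's own historical cadence:

```python
# Hedged sketch: flag contacts whose silence exceeds their usual activity gap.
import pandas as pd

def flag_neglected_relationships(activities: pd.DataFrame, quantile=0.75):
    """activities: columns ['contact_id', 'activity_date'] for calls, meetings, emails."""
    activities = activities.sort_values("activity_date")
    gaps = activities.groupby("contact_id")["activity_date"].diff().dt.days
    activities = activities.assign(gap_days=gaps)
    typical_gap = activities.groupby("contact_id")["gap_days"].quantile(quantile)
    last_seen = activities.groupby("contact_id")["activity_date"].max()
    days_silent = (pd.Timestamp.today() - last_seen).dt.days
    return days_silent[days_silent > typical_gap].index.tolist()
```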
The team may choose their technology stack with any mix of JavaScript, Python, and C#. The Actionable Message technology is from Microsoft; they have extensive documentation and a designer that the team should leverage.
PRA Health Sciences provides innovative drug development solutions across all phases and therapeutic areas. But innovation just for the sake of innovation isn’t why we do it. Side by side with our clients, we strive to move drug discovery forward, to help them develop life-saving and life-improving drugs. We help change people’s lives for the better every day. It’s who we are. Innovating to help people is at the heart of our process, but it’s even more than that. It’s also our privilege.
PRA Health Sciences has been enhancing its cyber security program and would like NC State students’ assistance building a honeypot technology to help identify adversaries in restricted networks as they begin probing the network for vulnerabilities.
Honeypots are systems that behave like production systems but have a smaller attack surface and lower resource requirements, and they are designed to capture information about how potential attackers interact with them. Their intent is to trick an adversary with network access into believing that they are real systems with potentially valuable information on them. The adversary’s effort to break into the honeypot can reveal a lot of information about the adversary and enable defenders to detect and gain an advantage on this threat.
In this project, students will create low-interaction production honeypots, which can be configured and deployed easily and remain cost-effective in large numbers. There are two parts to this project: 1) a central management system to configure, deploy, and track honeypots and their data, and 2) the honeypots themselves, which will report back to the management console with their status and attacker interaction data.
The central management system provides a way to monitor and control honeypots and serves as a central repository of data collected by each honeypot it controls. Users should be able to view, modify, and create honeypot configurations. Students are encouraged to leverage existing technologies, such as Modern Honey Network. This system needs to be able to deploy honeypots on virtual hosts using all kinds of virtualization technologies and support a wide variety of hardware such as normal PCs, Arduino Boards or Raspberry Pis.
The initial configuration of a honeypot will consist of an Nmap scan of another system that the honeypot must attempt to replicate. With this input, the honeypot should configure its simulated services so that an Nmap scan to the honeypot produces no detectable differences, while having as little attack surface as possible. Additional configuration options will allow:
To be effective, a honeypot must allow connections and simulate protocols from different layers:
Each honeypot is expected to have data collection capabilities so that it can detect when it is scanned or connected to and report the following information to the central management system or SIEM:
To test the successful configuration of a honeypot, Nmap scans with varying configurations of both the honeypot and the original system can be compared. To aid in testing and reduce false positives, the honeypots need to be able to “whitelist” traffic or IP addresses. As a stretch goal, a single honeypot should be able to acquire more than one configuration and network address to simulate multiple systems at the same time using the same hardware/virtual machine.
Modern Honey Network is the preferred central management system, and Python is the preferred development language. Students should use best practices for the development of secure software and document their work in great detail. Students should also strive to design honeypots to be modular so that additional protocols and functionality can be introduced, e.g., RDP, SSH, and SMB honeypots, honey websites, and active beaconing. This project is intended to be released as an open source project on GitHub and continued after the team has finished.
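For orientation only, a minimal low-interaction TCP listener might look like the sketch below; in the actual project the ports and banners would be generated from the Nmap scan of the imitated system, and events would be shipped to the central management system or SIEM rather than a local file.

```python
# Minimal illustrative low-interaction honeypot service (one port).
import datetime
import json
import socket
import threading

def listen(port, banner=b"220 service ready\r\n", log_path="honeypot.log"):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, (src_ip, src_port) = srv.accept()
        event = {"time": datetime.datetime.utcnow().isoformat(),
                 "src_ip": src_ip, "src_port": src_port, "dst_port": port}
        with open(log_path, "a") as log:
            log.write(json.dumps(event) + "\n")   # later: report to the console
        conn.sendall(banner)                      # mimic the real service banner
        conn.close()

# One lightweight thread per simulated service/port.
threading.Thread(target=listen, args=(2121,), daemon=True).start()
```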
Deep learning has revolutionized the field of computer vision, with new algorithm advancements constantly pushing the state-of-the-art further and further. The fuel for these algorithms is data, and more specifically data for which the ground-truth reality is known and specified. In the case of computer vision applications, this means having images available which are classified (classification), in which object locations are identified and labeled (object detection), or even in which each pixel is assigned to a specific object (instance segmentation). Annotating images with labels is tedious and quite time consuming, but having a good tool can facilitate the process.
The goal of this project is to design and develop a modular, web-based, interactive tool for annotating data instances (images) so that they can be used for training deep learning models. The specific focus will be annotation of images used to train models for computer vision applications, but the design should account for future extension to annotating text and audio data as well. With the ultimate objective being to significantly improve the ease and speed of annotation, this project should incorporate usability features with deep learning models that can augment the annotation process by suggesting bounding boxes. An addendum will be provided to the formed team working on this project.
To provide a good understanding of the desired tool, students should spend some initial time researching the capabilities of other image annotation tools, including RectLabel (https://rectlabel.com/), Supervisely (https://supervise.ly/), and Intel's CVAT (https://github.com/opencv/cvat).
The implemented tool will enable the user to:
Stretch goals:
In addition to the items above, the labeling time must be calculated for each task.
If time permits, students should perform studies comparing the time to perform labeling of a certain number of images (e.g. 100) with those of the other tools researched. Possible reasons for being better or worse compared to the other tools should be discussed in a report.
Overall, this project is a combination of (1) software engineering (e.g., use cases, stakeholders, design, frequent deployments, test-driven development, robustness, etc.) and (2) data science (e.g., training an object detection model and using it to assist with the annotation).
Familiarity with modern JavaScript frameworks is important to build an interactive web-based system. Further, familiarity with supervised machine learning methods is beneficial to understand the content. In particular, experience with deep learning methods for solving computer vision problems such as object detection is nice to have. Python will be used to invoke SAS Deep Learning models for suggested labels.
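As a sketch of how that Python piece could be exposed to the web front end (Flask and the endpoint shape are assumptions; the call into the SAS Deep Learning model is a placeholder):

```python
# Hypothetical suggestion endpoint: the front end posts an image reference and
# pre-fills the returned bounding boxes for the annotator to accept or adjust.
from flask import Flask, request, jsonify

app = Flask(__name__)

def suggest_boxes(image_path):
    """Placeholder for invoking the SAS Deep Learning object detection model."""
    return [{"label": "car", "x": 10, "y": 20, "w": 120, "h": 80, "score": 0.91}]

@app.route("/suggest", methods=["POST"])
def suggest():
    image_path = request.json["image_path"]
    return jsonify(suggest_boxes(image_path))
```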
The target users of this project are internal and external customers. However, the customer data usually are protected. Therefore, to measure the performance of this tool, the SAS Deep Learning Team will be providing a custom dataset that has no copyright issues so that the team can utilize it without any issues. The dataset will have roughly 10,000 images and the task will be object detection.
Bandwidth Inc. owns and operates a nationwide voice over IP network and offers APIs for voice, messaging, 9-1-1, and phone numbers. Bandwidth R&D is currently exploring emerging technologies in real-time communications using WebRTC coupled with our core services. Primary technologies include (but are not limited to): WebRTC, AWS Lambda, Java, SIP, OpenVidu, React, and AWS Cognito.
This project provides an exciting opportunity to develop a best-in-class user experience for managing sessions on a real-time communications (RTC) platform. The platform APIs provide data about sessions, packet loss, jitter, and various network conditions as well as session and client state. We need a component-based web application that consumes these APIs and displays session information in both high-level and detailed views. Early in the semester, we will provide a more complete requirements document and a working example of a similar web application, but in a nutshell the major components/features will be:
Example 1 - Session Graph UI
Example 2 - Session List/Grid
Example 3 - Drill Down
The goal of this project is to detect lateral phishing attacks. A normal phishing attack occurs when an external attacker tries to get a target's credentials by sending them an email that attempts to get the target to click on a link that contains malware. In contrast, in lateral phishing the attacker already has access to the credentials of an employee of the target organization. In this case they are leveraging their access to perform phishing attacks within the target organization to gain additional credentials. As with normal phishing, the degree of sophistication and targeting varies, from wide email blasts to very targeted spear-phishing.
The team will develop an approach to detecting lateral phishing attacks. Previous work in this area has used supervised learning approaches (specifically, a random forest classifier). In this case, we will focus on a combination of detecting unusual activity from individual users based on a social network graph, along with an analysis of the content of the email itself. That is, the team will develop social network graphs that represent user interactions based on email communication. Thus an email that is addressed to a target that is outside the user's social network would be anomalous. Similarly, if a user sends email to a group of people who are not normally grouped together (e.g., that represent two different projects or two different parts of the organization), then this should be flagged as anomalous. From the email content perspective, the features of interest would include whether there is a link provided in the email and the reputation of the target domain (e.g., cnn.com versus xfwer8uiydf.org) of the link. The algorithms should be easily extensible to include new data and features (e.g., user roles).
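To make the social-graph idea concrete, a minimal sketch (networkx assumed; function names are illustrative) could build the graph from historical sender/recipient pairs and flag messages whose recipients fall outside the sender's network or never co-occur:

```python
# Illustrative social-network check for lateral phishing detection.
import networkx as nx

def build_graph(emails):
    """emails: iterable of (sender, recipients) pairs from the training subset."""
    g = nx.Graph()
    for sender, recipients in emails:
        for r in recipients:
            g.add_edge(sender, r)           # sender has emailed this recipient
        for a in recipients:
            for b in recipients:
                if a < b:
                    g.add_edge(a, b)        # these recipients appear together
    return g

def is_anomalous(g, sender, recipients):
    known = set(g.neighbors(sender)) if sender in g else set()
    outside_network = [r for r in recipients if r not in known]
    never_grouped = [(a, b) for a in recipients for b in recipients
                     if a < b and not g.has_edge(a, b)]
    return bool(outside_network) or bool(never_grouped)
```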
Public datasets will be used for developing and testing the approach. More specifically, we will leverage the Enron data set at https://www.cs.cmu.edu/~./enron/. This dataset does not contain any lateral attacks, and so the team will need to hypothesize some lateral attacks and inject those into the dataset to use for training and testing the algorithm. The dataset will be divided into two subsets, one for training and one for testing. Two types of testing will be performed: (1) testing the performance of the algorithm in terms of detection capabilities, and (2) testing the robustness of the algorithm. In this latter case, we will inject errors in the testing dataset (e.g., malformed header information) to ensure that the algorithm handles these gracefully.
The output from the algorithm should identify any accounts that appear to perform a lateral phishing attack, along with an explanation for why that account was flagged (where possible). The output should be provided as a dashboard that includes other summary statistics (e.g., number of emails processed, number of unique senders, average number of recipients per email).
In addition to the classification performance for detecting lateral phishing, the team will also need to consider the scale at which the final classifier will need to perform, ensuring that any developed algorithms are sufficiently fast and lightweight.
The development environment will include Python 3, Jupyter Notebooks, and Anaconda (scikit-learn, TensorFlow, etc.).
The following provides a high-level overview of the project steps, which will be refined during project scoping with the team.
Understand project background
Prepare data source
Train and test the algorithm
Present results
Stretch Goals
Once a classifier has been developed to process data as a one-time action, the stretch goal would be to convert this to a version that runs continuously, analyzing email as it is received.
Additional Information
Recent academic papers in this space include
Objective
While mental health awareness is increasing, there is still a stigma associated with seeking help. Traditional options for treating mental health conditions are expensive and present a large barrier to entry, particularly for younger generations. Blue Cross NC sees an opportunity to change the way mental health is approached by bringing awareness and treatment through an easy and accessible medium. We at Blue Cross NC want to partner with North Carolina State University to develop a mobile app to change the way mental health awareness and treatment is provided.
Solution
Develop a cross-platform mobile app to bring awareness and strategies to improve mental health. The app must be easy to use for young adults (the initial target audience). Features the app can include are:
Expected Features:
Stretch Goals:
Blue Cross NC welcomes and encourages feedback from students during all phases of development to positively impact as many people as possible!
Learning
The team will strengthen their technical and soft skills during this project. Students will utilize their skills in programming languages / UI expertise to develop the raw framework of a cross platform app. This group will also get the opportunity to develop their soft skills as they will need to be peer facing and collaborative with many different parties as they partake in build-measure-learn feedback loops.
Project Outline
Gather user requirements and understand what the college aged consumers of a mental health application would want to see.
Build a cross platform app framework to begin testing user requirements and monitoring actionable metrics.
Burlington Carpet One has been owned and operated by the same family since 1972. We are a full-service flooring company catering to retail customers, contractors, commercial companies, and realtors. Currently under the guidance of our second generation, we have joined the Carpet One Co-Op (CCA Global), the largest co-op in the world, comprising over 1,000 like-minded flooring store owners committed to positively impacting the flooring industry as well as our local communities. Unlike the majority of flooring companies, we employ in-house flooring technicians, not subcontractors.
While many industries, such as physicians, dentists, and hair stylists, have access to text/email- and schedule-based customer communication and scheduling platforms, the flooring industry is lacking in this arena. While we are currently using a scheduling dashboard by “FullSlate”, it only addresses approximately 50% of the features needed by our industry. A web-based application built specifically for the flooring industry would provide a scheduling dashboard also equipped to communicate scheduled installation dates to customers with multiple precursory reminders, as well as follow-up maintenance reminders. All communication will be delivered via email and/or text, based on customer preference. In addition, each scheduled block should allow for the assignment of an in-house or sub-contractor installation technician, as well as flooring material assignments and retail sales associate assignments. It is important that the dashboard accommodate several functions:
Design & build a web based dashboard to maintain an installation schedule similar to below:
Each time slot should have the ability to book multiple installations, not just one (the limitation of current dashboard options). Multiple crews will begin their day in the 8 am time slot. Some jobs last an entire day and some do not; most crews will have only one assignment daily, while some may have 1-3. (A sketch of a possible schedule data model follows this list of requirements.)
Each entry will need to connect with a database of installation technicians, both in-house and sub-contractors (from drop down menu).
Each entry will need to connect with a database of retail sales associates (RSA) (from drop down menu)
Each entry will need to connect to a flooring category: LVP, LVT, Ceramic, Carpet, Engineered Hardwood, Solid Hardwood, Sand & Finish, etc. (from drop down menu),
Each scheduled time slot should notify the customer of the scheduled installation date using preset templates (email/text).
Preset automatic messages will be sent 1 week prior as well as 1 day prior to the installation, and will be associated with each customer and time slot.
The day of installation should have an easy to use OTW (on the way) message option, notifying the customer that their crew is on the way, along with a photo of the assigned installation technician.
A preset template should follow the completed installation and request a company review by being directed to grade.us (online review platform)
Allow for multiple location coordination using separate or shared staff/calendars.
Allow for syncing with QuickBooks & RFMS accounting software, eliminating double entry of customer information.
Calendar should offer viewing by different screen options: 1) customer calendar 2) installation calendar 3) RSA calendar.
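The sketch below illustrates one possible underlying data model for the schedule (all names are illustrative, not requirements): a time slot holds multiple installation bookings, and each booking carries the customer, crew, RSA, and flooring category that the reminder templates would draw on.

```python
# Illustrative schedule data model for the flooring dashboard.
from dataclasses import dataclass, field
from datetime import date, time

@dataclass
class Installation:
    customer: str
    contact: str                  # email or mobile, per customer preference
    crew: str                     # in-house or sub-contractor technician
    rsa: str                      # retail sales associate
    flooring_category: str        # e.g. LVP, Carpet, Sand & Finish

@dataclass
class TimeSlot:
    day: date
    start: time
    installations: list = field(default_factory=list)   # multiple bookings allowed

slot = TimeSlot(day=date(2020, 3, 2), start=time(8, 0))
slot.installations.append(Installation("J. Smith", "jsmith@example.com",
                                        "Crew A", "RSA 1", "Carpet"))
```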
Web design experience would be helpful. Students are free to use any appropriate technologies to build this system.
NTP is a widely used protocol that provides time synchronization for endpoints worldwide. NTP implementations have suffered from weak security and various implementation flaws for a long time. The IETF is standardizing two new options for symmetric (draft-ietf-ntp-mac) and asymmetric (draft-ietf-ntp-using-nts-for-ntp) authentication of NTP. These drafts, when standardized, aim to address many of the security concerns and drawbacks that existed in older NTP authentication methods.
The drafts are being implemented by major open-source vendors that provide NTP implementations. Cisco specifically funded the open-source project NTPsec to accelerate the implementation of these standards and plans to use these implementations in its products and offerings. It is something that our customers expect and that our products need to provide.
For this project, we want to integrate and test the NTPsec implementation of these two new standards. Students will need to read the NTPv4 protocol and these two standards so that they can make use of their implementation by NTPsec. The NTPsec implementation would need to be tested and its compliance with draft-ietf-ntp-mac and draft-ietf-ntp-using-nts-for-ntp needs to be verified.
The students are expected to first design and deploy an infrastructure for testing their code. That infrastructure will include:
Students will also have to design and implement two configurable “wrapper” programs that use the library to demonstrate how someone would integrate a client and a server with it. The programs do not have to be written in C; they could be written in Python or any other common language. They will be a client and a server that accept command-line configurable options such as role (client, server), authentication type, key size, etc. For example, running the command ‘nts-wrapper-client nts_server xyz’ would configure the NTS server in the ntpd configuration file and restart the service. The server wrapper program should run on the AWS or Google Cloud server, and the client wrapper program should run on Client1. Both programs should be shown to configure the service correctly and demonstrate that clocks are synchronizing. Students are also expected to identify the applicable library configuration lines that adopters of the library would use to integrate it into their code.
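As a rough illustration of the client-side wrapper, the Python sketch below rewrites the ntpd configuration to point at an NTS-enabled server and restarts the service, matching the example invocation above. The configuration directive shown ("server <host> nts"), the config path, and the service name are assumptions to be verified against the NTPsec documentation for the version under test; this is a sketch, not the required implementation.

```python
#!/usr/bin/env python3
# Sketch of "nts-wrapper-client": configure ntpd as an NTS client and restart it.
# Directive name, config path, and service name are assumptions; verify against
# the NTPsec documentation before use.
import argparse
import subprocess

NTP_CONF = "/etc/ntp.conf"   # path assumed; adjust for the test deployment


def configure_nts_client(server: str, conf_path: str = NTP_CONF) -> None:
    """Append an NTS server association to the ntpd config file."""
    with open(conf_path, "a") as conf:
        conf.write(f"\nserver {server} nts\n")   # directive name is an assumption


def restart_ntpd() -> None:
    """Restart ntpd so the new configuration takes effect (systemd assumed)."""
    subprocess.run(["systemctl", "restart", "ntpd"], check=True)


def main() -> None:
    parser = argparse.ArgumentParser(description="Configure ntpd as an NTS client")
    parser.add_argument("command", choices=["nts_server"],
                        help="configuration action, e.g. set the NTS server")
    parser.add_argument("value", help="value for the action, e.g. the server hostname")
    args = parser.parse_args()

    if args.command == "nts_server":
        configure_nts_client(args.value)
        restart_ntpd()


if __name__ == "__main__":
    main()
```

A server-side wrapper would follow the same pattern, writing the server's NTS key and certificate directives and restarting the service, then confirming clock synchronization from the client.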
Students must have
Students will be able to work on a practical industry problem. Security is a hot topic today, and NTP is a fundamental protocol for the operation of the Internet. Students will have a chance to see how important protocol security is and what practical problems weak security can create in industry. They will also get a chance to contribute back to a very useful library used in almost every operating system and vendor product.
Cisco has been looking into secure NTP for some time. We want to ensure that our products and offerings can make use of the NTPsec implementation to secure how they synchronize time. It is something that our customers expect and that our products need to provide.
Across different organizations at Fidelity, on certain nights many teams come together to release new features into a Production environment for customers. A central team (Ops) is responsible for the coordination, and other teams are responsible for ensuring that, as the install progresses through different stages, the applications or services they own are validated/certified and signed off on. During such a night, upwards of 100 teams could be introducing new features and needing to certify those features as part of the install.
At the moment, this coordination is done via a chat room, spreadsheets, telephone, and email for each grouping of teams, which makes it extremely tedious to maintain. Furthermore, it becomes the sole responsibility of the few who lead their groupings of teams to request status updates for sign-off of the applications. This manual process is not ideal and takes time away from the few individuals monitoring for their group.
It would be useful for the various teams to have a dashboard that serves as a central location for communication during and around the production release. Furthermore, since people may need to look back on prior installs, an audit trail of the events from each install, essentially a history of the dashboard from prior months, would be helpful.
At a high level, the platform could have an admin page and a general user page. These pages would allow users of different roles to view the stages that are open for validation and to sign off on validation for their respective teams. This would create a much more manageable way to verify sign-off from the teams and could even allow for auditing in the future.
The vision is a web-based application that allows many different people to access the site and work together in filling out and updating the information needed to communicate the install plan. These technologies are specifically chosen because they are the most common among what Digital Platforms developers use today and would allow a smooth transition for future enhancements.
The system described above requires authentication and authorization based on different types of users; for the purposes of this project we would like that capability stubbed out (simple is fine here) and done in a plug-and-play way, so that when the project is handed over we can incorporate the Fidelity authorization system.
For information about what teams are installing for a given release, Fidelity already has a system in place that exposes an API for this data. We would like the NCSU team to take that API contract and stub the data out for their prototype, so that the real API can be incorporated later at hand-over (a sketch of both stubs appears below).
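The following Python sketch shows one plausible shape for the two stubs, assuming hypothetical names and a made-up data layout; the real Fidelity authorization system and install-data API contract would replace these at hand-over.

```python
# Minimal sketch of a plug-and-play auth stub and an install-data stub.
# All names and field layouts are hypothetical placeholders.
from abc import ABC, abstractmethod
from typing import Dict, List


class AuthProvider(ABC):
    """Pluggable authentication/authorization interface."""

    @abstractmethod
    def get_role(self, user_id: str) -> str:
        """Return the role ('admin' or 'user') for a given user."""


class StubAuthProvider(AuthProvider):
    """Simple stub: users in a hardcoded set are admins, everyone else is a user."""

    ADMINS = {"ops_lead"}  # hypothetical

    def get_role(self, user_id: str) -> str:
        return "admin" if user_id in self.ADMINS else "user"


def get_release_installs(release_id: str) -> List[Dict]:
    """Stub for the existing Fidelity API that lists what teams are installing.

    Field names below are placeholders; the prototype should mirror the actual
    API contract provided by the sponsor.
    """
    return [
        {"team": "Team A", "feature": "New login flow", "stage": "validation"},
        {"team": "Team B", "feature": "Statement redesign", "stage": "signed_off"},
    ]
```

Keeping both behind small interfaces like this should make swapping in the real systems a configuration change rather than a rewrite.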
Having a common platform that allows users to understand and communicate the accurate status of where we are in the install/release-night process will reduce a tedious manual process and free people to use chat to focus on the current issues and how to tackle them during the night.
Additional information and use cases will be provided once the team is formed.
For more than 40 years Microsoft has been a world leader in software solutions, driven by the goal of empowering every person and organization to achieve more. They are a world leader in open-source contributions and, despite being one of the most valuable companies in the world, have made philanthropy a cornerstone of their corporate culture.
While primarily known for their software products, Microsoft has delved more and more into hardware development over the years with the release of the Xbox game consoles, HoloLens, Surface Books and laptops, and the Azure cloud platform. They are currently undertaking the development of the world’s only scalable quantum computing solution. This revolutionary technology will allow the computation of problems that would take a lifetime to solve on today’s most advanced computers, allowing people to find answers to scientific questions previously thought unanswerable.
SystemVerilog is a hardware description and hardware verification language used to model, design, simulate, test, and implement electronic systems. Several extensions for Visual Studio Code currently support SystemVerilog entry. Of these, we've identified one that seems best, and we would like to contribute to it and add features that would make it a compelling tool for all hardware engineers at Microsoft. These features will make it easier and more user-friendly to write and navigate SystemVerilog.
This project will allow students to gain experience not only in designing, coding, and testing the features added to this extension, but also in collaborating with people on various teams and sharing the designs and tests of these features. The students on this project will each work on multiple features that they will then bring together, and they must maintain effective communication so that each can learn what the others are designing, coding, and testing.
The following GitHub repository will be used throughout this project:
https://github.com/eirikpre/VSCode-SystemVerilog
Link to existing extension:
https://marketplace.visualstudio.com/items?itemName=eirikpre.systemverilog
The development team will be provided with details on the existing features, details on additional features being requested, and sample inputs to test features.
StorageGRID is NetApp’s software-defined object storage solution for web applications and rich-content data repositories. It uses a distributed shared-nothing architecture, centered around the open-source Cassandra NoSQL database, to achieve massive scalability and geographic redundancy, while presenting a single global object namespace to all clients.
Supporting a scale-out distributed system like StorageGRID presents a unique set of challenges. If something goes wrong, the root cause analysis may require piecing together clues scattered amongst log files on dozens of individual nodes. Due to the complexity involved, a lot of information is recorded for every transaction, but this leads to an embarrassment of riches – often, identifying what information is relevant is the hardest part of solving a problem.
Indeed, one of the most common questions someone asks when reviewing reams of StorageGRID logs is, “Is this normal?”. Would I see this set of log messages occasionally on a healthy system, or are they unusual, and therefore possibly related to the problem I’m investigating?
In 2018 StorageGRID sponsored an NCSU Senior Design project that became known as “Logjam” – a log ingest, indexing, and analysis dashboard for StorageGRID based on ELK (elasticsearch/logstash/kibana). The primary goal of Logjam is to provide quick, evidence-based, statistical answers to “Is this normal?”
To do so, Logjam must derive value from a large set of unstructured data. NetApp’s 24x7 Technical Support team collects gigabytes of log files from many different NetApp products every day. This material goes into two NFS file shares (one in North America, the other in Europe), which today contain petabytes of historical log data. The only consistent organizational elements in this log repository are the top-level directories, which have names that match the “ticket numbers” of the customer cases that necessitated collecting the data underneath that directory. Descend into a numbered case directory, and you might find virtually anything.
This introduces the first problem Logjam needs to solve – separating useful data from junk, and applying structure to it to make it easier to work with. You have petabytes of data, loosely organized into thousands of numbered case directories, which is being updated constantly. A lot of this data is compressed or otherwise obfuscated – it is difficult to work with. Most of the cases relate to other NetApp products (not StorageGRID) – these cases should be ignored. However, there is no accurate way to tell from the case number whether StorageGRID data was collected – the only way to determine this is to look at some of the data.
Clearly there is a need to do a one-time full scan of the entire multi-petabyte repository looking for StorageGRID cases, but doing regular full scans is unlikely to result in acceptable performance. Instead, after the initial scan, Logjam should be able to keep itself current by doing incremental updates – finding and indexing files and directories that have changed since the last scan. This problem is not well-solved by the current Logjam implementation.
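As a rough illustration of an incremental update pass, the Python sketch below walks the repository and yields files modified since a recorded timestamp of the previous scan. The function and path names are hypothetical, and a production version would also need to handle deletions, NFS attribute caching, and compressed archives.

```python
# Minimal sketch of an incremental scan based on file modification times.
# Names and paths are hypothetical; this does not reflect the current Logjam code.
import os
import time
from typing import Iterator


def changed_files(root: str, last_scan_epoch: float) -> Iterator[str]:
    """Yield paths under `root` that were modified after the previous scan."""
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > last_scan_epoch:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it


if __name__ == "__main__":
    last_scan = time.time() - 24 * 3600   # e.g. the scan ran 24 hours ago
    for path in changed_files("/mnt/log_repository", last_scan):
        print("needs (re)indexing:", path)
```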
The second problem Logjam needs to solve is what structure to apply to the StorageGRID case data to make it easier to work with, specifically to help answer the question “is this normal”? This is a categorization and indexing problem, and to some extent a parsing problem. The current Logjam implementation includes a decent solution to this problem, but enhancements are certainly possible.
The third problem Logjam needs to solve is, how can average users, who are neither StorageGRID nor Elasticsearch experts, take full advantage of the cleaned and structured case data? One element of the solution here might be to literally add an “Is this normal?” button to the Kibana-based UI, allowing users to paste in the log they’re questioning, click the button, and get information on frequency, cross-references to case numbers, and perhaps a visualization. This is a query formulation and UI/UX problem. The current Logjam implementation makes a good attempt at solving this problem, but would benefit from additional work.
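To make the query-formulation idea concrete, the sketch below shows the kind of request an “Is this normal?” button might issue using the elasticsearch Python client. The index name and field names ("message", "case_number") are assumptions about how the documents might be structured, not a description of the current Logjam schema.

```python
# Sketch of an "Is this normal?" query: count matches for a pasted log line and
# break the hits down by case number. Index and field names are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # address assumed


def is_this_normal(log_line: str, index: str = "logjam") -> dict:
    """Return how often a pasted log line appears and which cases it appears in."""
    query = {
        "query": {"match_phrase": {"message": log_line}},
        "size": 0,  # we only want counts and aggregations, not the documents
        "aggs": {"cases": {"terms": {"field": "case_number", "size": 20}}},
    }
    resp = es.search(index=index, body=query)
    return {
        # note: the shape of hits.total varies between Elasticsearch versions
        "total_hits": resp["hits"]["total"],
        "cases": resp["aggregations"]["cases"]["buckets"],
    }
```

The UI work would then be presenting the hit count and case cross-references in a way that lets a support engineer judge at a glance whether the message is routine or rare.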
The fourth problem Logjam needs to solve is, how can users get access to Logjam? StorageGRID has support engineers in Vancouver, RTP, Amsterdam, and various locations in Asia.
One centralized Logjam instance is unlikely to be the best solution from a performance or reliability perspective; instead, it should be easy to spin up Logjam instances on user-provided compute clusters, so the tool can be deployed close to where it’s being used. This is a DevOps problem. The current implementation makes an attempt at solving this problem, but there’s a lot of room for improvement.