Projects – Spring 2007


Create a Route Network Definition File & Mission Definition File Creator/Editor

Background

The Defense Advanced Research Projects Agency (DARPA) has announced a new Grand Challenge -- the Urban Challenge. The previous Grand Challenge was to construct an autonomous vehicle that could navigate an off-road course (approximately 130 miles) and demonstrate the vehicle’s capability in a “race” across the desert. That race occurred in October 2005 and was successfully completed by five entries. The new Urban Challenge moves the venue to a mock urban setting, requiring autonomous ground vehicles to merge into moving traffic, navigate traffic circles, negotiate busy intersections, and avoid obstacles.

Insight Racing, a local team that built a vehicle that qualified for the last Grand Challenge, is planning an entry for the Urban Challenge. Insight's entry into the last Grand Challenge completed 28 miles of the 132-mile course, placing them 12th out of 23 entries -- a very respectable effort.

Project Description

The Route Network Definition file contains a description of the road network that will be used for the DARPA 2007 Urban Challenge. The Mission Definition file contains a set of checkpoints and speed limits that define a mission on the road network that is to be driven.

Pick up last semester's project and add new functionality to graphically create and edit the contents of the Route and Mission files. Create and edit new files using an interface to a GPS receiver that can collect the lat/long points. Make an RNDF file for the NC State campus that can be used for testing.

This code should probably be written completely in Java.

Insight Racing will provide the GPS interface code if the team would prefer to focus on the editor, or the team can develop the interface using standard RS-232 and NMEA messages.
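
If the team chooses to build the GPS interface itself, the core task is decoding standard NMEA 0183 sentences as they arrive over RS-232. The sketch below is offered only as a starting point (it uses the well-known example GGA sentence, not Insight Racing's code) and shows how latitude and longitude can be extracted from a GGA fix in Java:

```java
/** Minimal sketch: decode latitude/longitude from an NMEA GGA sentence.
 *  NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm. */
public class NmeaGga {
    public static void main(String[] args) {
        // The canonical GGA example sentence; real input would stream from the serial port.
        String sentence = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47";
        String[] f = sentence.split(",");
        double lat = toDegrees(f[2], 2) * (f[3].equals("N") ? 1 : -1);
        double lon = toDegrees(f[4], 3) * (f[5].equals("E") ? 1 : -1);
        System.out.printf("lat=%.6f lon=%.6f%n", lat, lon); // 48.117300 11.516667
    }

    private static double toDegrees(String ddmm, int degDigits) {
        double degrees = Double.parseDouble(ddmm.substring(0, degDigits));
        double minutes = Double.parseDouble(ddmm.substring(degDigits));
        return degrees + minutes / 60.0;
    }
}
```

A production version would also validate the checksum (the two hex digits after `*`) before trusting a sentence.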

DARPA 2007 Urban Grand Challenge Robotic Network Status & Performance Monitor & Computer System Process Manager

Background

The Defense Advanced Research Projects Agency (DARPA) has announced a new Grand Challenge -- the Urban Challenge. The previous Grand Challenge was to construct an autonomous vehicle that could navigate an off-road course (approximately 130 miles) and demonstrate the vehicle’s capability in a “race” across the desert. That race occurred in October 2005 and was successfully completed by five entries. The new Urban Challenge moves the venue to a mock urban setting, requiring autonomous ground vehicles to merge into moving traffic, navigate traffic circles, negotiate busy intersections, and avoid obstacles.

Insight Racing, a local team that built a vehicle that qualified for the last Grand Challenge, is planning an entry for the Urban Challenge. Insight's entry into the last Grand Challenge completed 28 miles of the 132-mile course, placing them 12th out of 23 entries -- a very respectable effort.

Project Description

The computer systems that will be used on LoneWolf for the DARPA 2007 Urban Challenge are networked using Ethernet. They communicate using a UDP-based protocol that is compatible with the Joint Architecture for Unmanned Systems (JAUS) standards. We need a network monitor that will observe the messages passed between components within the vehicle and gather relevant statistics about the traffic. This system also needs to make sure that the proper component processes are up, running, and healthy on each Linux computer system. The monitor should also feature a Java-based Graphical User Interface (GUI) that shows network utilization and process/computer system status in real time.

Insight Racing will provide complete documentation on the JAUS protocol that they are using for interprocess communication.

The process monitor should probably be written in C and the GUI written in Java.
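
As a rough illustration of the statistics-gathering side (not the JAUS-aware monitor itself), the Java sketch below counts messages and bytes per sending host on a single UDP port. Port 3794 is the IANA-registered JAUS port, but the team should confirm all specifics against Insight Racing's documentation:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.HashMap;
import java.util.Map;

/** Sketch only: per-sender UDP message/byte counters, the kind of
 *  statistic the real-time GUI would display. */
public class UdpTrafficCounter {
    public static void main(String[] args) throws Exception {
        Map<String, long[]> stats = new HashMap<>(); // sender -> {messages, bytes}
        try (DatagramSocket socket = new DatagramSocket(3794)) {
            byte[] buf = new byte[65535];
            for (int i = 0; i < 1000; i++) { // sample a fixed number of packets
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                socket.receive(p);
                long[] s = stats.computeIfAbsent(
                        p.getAddress().getHostAddress(), k -> new long[2]);
                s[0]++;                // message count
                s[1] += p.getLength(); // byte count
            }
        }
        stats.forEach((host, s) ->
                System.out.printf("%s: %d msgs, %d bytes%n", host, s[0], s[1]));
    }
}
```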

Customizable NCSU COE Website for Alumni

There are some 47,000 alumni of NC State’s College of Engineering. The College’s future will be strengthened by the support and volunteerism of its large alumni base and other friends of the College. The NC State Engineering Foundation is currently improving its Web site. As part of that effort, the Foundation seeks assistance from a senior design team in Computer Science in building tools that will allow visitors to customize their College of Engineering Web experience.

This project will involve a high degree of creative latitude: the Foundation is looking for ways to strengthen the user experience that will help foster a lasting network of alumni. We therefore expect solutions to encompass aspects of social networking and Web 2.0.

For instance, some of what we would like to accomplish could be done through existing third-party social networking sites such as LinkedIn, the successful business networking site. Some alumni groups have used that site to organize their own efforts more efficiently and to enhance the information they have about alumni. Exploring ways to integrate our customized site with that kind of third-party site should be a part of this project.

Elements that we have identified as core pieces of this project:

  • Optional site registration for all unique visitors
  • Ability to update contact and employment information
  • Searchable alumni directory (database)
  • Ability to select news items of interest to the user
  • RSS module that automatically displays items of interest to unique visitors based on their selections
  • Ability to sign up for volunteer opportunities within the College
  • Ability to sort “Class Notes” by year of graduation or department
  • Blog functionality—we would like to consider a Dean’s blog for the site

Performance Data Storage

Background

In characterizing the performance of any given computing environment, there are numerous components that must be monitored in order to obtain a complete understanding of the system's performance. Collecting data from these component systems yields a data set that is too large for convenient analysis as a whole, yet the system must be analyzed as a unit so that performance bottlenecks can be identified. Since multiple tests can run on a given system, and multiple systems may be under test at any given time, storing the collected data and retrieving a test result for analysis quickly becomes a non-trivial problem.

Objective

We need to have our existing data collection tool modified to support storing collected data in a database that will be common across multiple tests, and an interface for that database that will support retrieving data from a specific test for analysis in other tools.

Example

In our example we will be monitoring four different inputs: the benchmark application that the user is running, the performance counters on the system running the benchmark, the statistics from the Celerra, and the data from the Clariion storage array. In our current environment these results are returned to the user as four different CSV (Comma Separated Values) files and are aggregated into one file on a common time index using an existing tool. The current output is another CSV file. The table below represents what such a file might look like. Please note that it is possible, and common, to have empty space for a given statistic at a given time, since each system may report data at a different interval.

Time  | Benchmark IOPS | Server CPU Utilization | Celerra CPU Utilization | Clariion CPU Utilization
00:00 | 110            | 25%                    | 10%                     | 4%
00:10 | 120            | 26%                    |                         |
00:20 | 133            | 26%                    | 11%                     |
00:30 | 150            | 25%                    | 10%                     | 4%
00:40 | 100            | 25%                    |                         |
00:50 | 120            | 26%                    | 11%                     |
01:00 | 130            | 26%                    | 11%                     | 4%
01:10 | 125            | 25%                    |                         |

The simplest case that we can consider is when we have one benchmark, running on one server, connected on a dedicated network to a single Celerra and a single Clariion. These are each components of the larger system that can be monitored independently. In this scenario we will have a minimum of several hundred data columns. It should be obvious that such a data table will be difficult to analyze without first limiting the number of statistics that are included, and that the problem becomes worse as you increase the number of components in the system.

The issue that presents itself is how to maintain that data over a long time period and ensure that all the test data is available when needed. The system collecting the data may be replaced, or multiple users may set up their data collection scripts to store files in different places. Storing data in a central location will help to alleviate this problem.

In this project we want to modify an existing aggregation tool to support storing its output in a common database that you will design, and provide an interface to that database to allow easy retrieval of the statistics from a given test.

Requirements:

  1. Design a database structure that will support the storage of a large amount of test data with minimal administrative overhead. At a minimum the database should be able to handle 1GB of data added per day for a year.

    One of the challenges in this project is that, for each test, not only can the number of each type of system being monitored change from test to test, but the amount of data reported by a given system type can also change with the specific configuration of a test system, and multiple systems can be running tests and reporting results at the same time. The data structure will need to handle these variations and return the proper data when a query is executed. (A minimal schema sketch appears after this list.)
  2. Modify the existing data aggregation utility to allow it to report results to the database.

    The current behavior is to output the data to a CSV file. We would like to retain this output option for individuals who want it, and provide the ability to output to the database.

    For this requirement you will also have to keep track of which test the data belongs to, and some information about the test. In this initial version of the database it will be sufficient to store the configuration file from the data collector in the database along with the collected data.
  3. Provide a user interface that allows engineers to easily retrieve, export (and import), or purge their test results.

    We will obviously need to retrieve results in order to make use of the system. The retrieval interface should allow a user to select the statistics and time ranges that will be returned so that extraneous data does not unnecessarily complicate the analysis.

    We also need the ability to export a whole test result so that it can be archived with a project, and later import that data back to the database for further analysis if required. At certain times we may also want to purge a test result (for example, after an export) to control the size of the database.
  4. Support the ability to create, read, and write from multiple databases. This will allow us the flexibility to partition data by product families over time. Export/Import must allow the ability to export a complete set of test information from one database and import it into a new database.
  5. Provide access controls that associate test results with the user who generated them and limit destructive behavior to the owner of a test or an administrator.
  6. Provide instructions and scripting suitable for easily installing the database and associated software on a specified system.

    This solution may be of interest to more than one group or department, so it will be necessary to provide a way for each interested group to set up their own environment.
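
As a starting point for requirement 1, one schema shape that tolerates a varying number of systems and statistics per test is a narrow "long" table: one row per test, timestamp, source, and statistic. The sketch below is only illustrative (table and column names are invented, and H2's in-memory database is used so the example is self-contained; any JDBC-accessible database would do):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

/** Hypothetical "long" schema: each statistic sample is one row, so tests with
 *  different component counts and column sets share one structure. */
public class SchemaSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:perf");
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE test_run (test_id IDENTITY, owner VARCHAR(64), "
                     + "config_file CLOB, started TIMESTAMP)"); // requirement 2: keep the config file
            st.execute("CREATE TABLE sample (test_id BIGINT, ts VARCHAR(16), "
                     + "source VARCHAR(64), statistic VARCHAR(128), value DOUBLE)");
            st.execute("CREATE INDEX idx_sample ON sample (test_id, statistic, ts)");
        }
        // One aggregated CSV cell becomes one row; empty cells are simply absent.
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO sample VALUES (?, ?, ?, ?, ?)")) {
            ps.setLong(1, 1);
            ps.setString(2, "00:00");
            ps.setString(3, "celerra");
            ps.setString(4, "CPU Utilization");
            ps.setDouble(5, 10.0);
            ps.executeUpdate();
        }
    }
}
```

A retrieval query can then reassemble exactly the statistics and time range a user asks for, which keeps extraneous data out of the analysis (requirement 3).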

Value to EMC

EMC software developers and performance engineers will use this tool to study blended performance data from complicated performance setups. The tool will provide them with a means to store the data in a consistent way and later retrieve what is necessary for a specific report, chart, or analysis.

Company Background

EMC Corporation is the world leader in products, services, and solutions for information storage and management.

We help customers of all sizes manage their growing information—from the time of its creation to its archival and eventual disposal—through information lifecycle management. EMC information infrastructure solutions are the foundation of this mission. An EMC information infrastructure unifies networked storage technologies, storage platforms, software, and services to help organizations better and more cost-effectively manage, protect, and share information.

EMC Automated Networked Storage combines SAN (Storage Area Network), NAS (Network Attached Storage), and CAS (Content Addressed Storage) environments into an integrated networked storage infrastructure. Combined with our open storage software products, we unify networked storage technologies, storage platforms, software, and services to enable organizations to better and more cost-effectively manage, protect, and share information.

Our vision is to create the ultimate information lifecycle management company—to help our customers get the maximum value from their information at the lowest total cost, at every point in the information lifecycle.

The Research Triangle Park Software Design Center is an EMC software design center. We develop world-class software that is used in our NAS, SAN, and storage management products.

Build a Merchandising Space Planning Application Using Second Life as the Platform

Retailers and wholesalers need a highly graphical and collaborative environment for designing and communicating space allocation for merchandise. Second Life is a highly graphical, Web-based multiplayer platform that allows people to interact with and explore virtual environments. Can we leverage such a platform to integrate directly with the real-world activities of merchandise planning, allocation, and space planning? We want to research the feasibility of using Second Life for this purpose.

Project goals are to:

  1. Collect and document requirements for a 3D Merchandising Space Planning application.
  2. Conduct a feasibility analysis on using Second Life as the platform for implementation. This will need to cover all legal, social, and technical issues.
  3. Implement a Space Planning prototype that integrates seamlessly with the SAS Merchandising Suite.

Undergraduate Scholarship Community Website

The Needs

Cisco has an undergraduate scholarship program that funds four years of college and offers three summer internships. There are two processes that need support online:

  1. The Scholarship process. This requires forms for initial applications; forms for use by the college (e.g., description pages for the college website); candidate selection and acceptance forms; application status; and forms for various topics, such as academic interests, and so on.
  2. The Internship process. This requires additional information, such as job-type interests particular to each scholar's circumstances (for example, location and type of work). Cisco tries to match scholar requests and to place scholars, if possible, in a related internship. At the end of the summer, both the intern and the Cisco supervisor create a short report on the experience.

High level requirements

Obviously there are a number of roles and concerns in this type of a project. There are users, each with specific access rights. There are a number of different persistent stores. And there are all the attendant view, report, and administrative functions. Some of this is covered below.

  1. Application – The online application form. One per candidate. This may include associated documents, like scanned-in transcripts or acceptance letters, but we cannot count on everyone being able to get all their documentation online. This means there will have to be some method for tracking and associating offline documents and their state changes (received, sent, late, etc.).
  2. Accept/Decline Letters – These are online forms that are used to formally register a candidate accepting or declining a scholarship. In the case of an acceptance of a scholarship offer, it is important to make sure the candidate has read award conditions carefully.
  3. Candidate – An online form gathering candidate personal information.
  4. Scholar – A candidate who has accepted a scholarship. Used by both the scholarship process and the internship process.
  5. Internship Interest – One per Scholar
  6. Internship – One per Scholar per year
  7. Internship Final Report - Cisco – One per Internship
  8. Internship Final Report - Scholar – One per Internship
  9. Yearly Assessment – One per Scholar per year (occasionally personal circumstances mean more than a four-year period; scholars do not get more than four years of funding, but there might be more than four years of assessments).

Roles

  1. Cisco Administration
  2. Cisco Internship Supervisor
  3. University Advising Staff
  4. Scholar
  5. Candidate

Access models and view sets for each of these roles need to be completely defined.
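
As one hypothetical starting point for that definition (the roles come from the list above; the permissions are invented for illustration), a coarse role-to-permission mapping in Java might look like this:

```java
import java.util.EnumSet;
import java.util.Set;

/** Illustrative sketch of a role/permission mapping; real view sets
 *  would be far finer-grained and driven by identity as well as role. */
public class AccessModel {
    enum Role { CISCO_ADMIN, CISCO_SUPERVISOR, UNIVERSITY_ADVISOR, SCHOLAR, CANDIDATE }
    enum Permission { VIEW_ALL_APPLICATIONS, EDIT_OWN_APPLICATION, WRITE_ASSESSMENT, PURGE_RECORDS }

    static Set<Permission> permissionsFor(Role role) {
        switch (role) {
            case CISCO_ADMIN:        return EnumSet.allOf(Permission.class);
            case CISCO_SUPERVISOR:   return EnumSet.of(Permission.WRITE_ASSESSMENT);
            case UNIVERSITY_ADVISOR: return EnumSet.of(Permission.VIEW_ALL_APPLICATIONS,
                                                       Permission.WRITE_ASSESSMENT);
            case SCHOLAR:
            case CANDIDATE:          return EnumSet.of(Permission.EDIT_OWN_APPLICATION);
            default:                 return EnumSet.noneOf(Permission.class);
        }
    }

    public static void main(String[] args) {
        System.out.println("Candidate may: " + permissionsFor(Role.CANDIDATE));
    }
}
```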

Security and Integrity

Security: Obviously, if students’ private information is present, it needs to be strictly firewalled by role and by identity, along with any confidential notes that might be included in internship yearly assessments. There is also the question of protection from outside hackers. Both NCSU and Cisco have protections around their own internal information store systems. We will need to ensure that the security in this application can work seamlessly within either environment. An important consideration is that it might be difficult to precisely specify what Cisco does, except in the abstract, because of our own security constraints.

Integrity: It is important to ensure that bugs, various outages, or other kinds of events cannot corrupt site data. Interrupted sessions should probably be completely restarted, but, if it is straightforward, some context might be retained.

Reports, Displays, and Views in General

A View and Display model that minimizes the use of separate report forms is preferred. That is, the View itself should be usable and manipulable as the report, through integrated sorting and filtering. This capability is open for discussion.

Branding

This means a mechanism for incorporating NCSU and Cisco references and logos. Ideally, if it is acceptable to NCSU, we would eventually like to port this application to our own website and use it for other university scholarship programs. While we might target NCSU and perhaps even bring it up on an NCSU site at first, it should be portable. This requirement probably means:

  1. Standard and portable internal interfaces to web servers, databases, etc.
  2. Packaged and decoupled references to University specific content.

Cisco would like to hear the designer’s views on these and any other aspects of the project, including suggestions for completely different approaches to filling the needs.

Engineering Lab Management System Database API

Create a web service front-end to the NetApp Engineering Lab Management System (ELMS) database. The ELMS database already exists, so database tables are already defined and examples of data are readily available. The web service must provide a configuration tool to assist users in configuring equipment racks in engineering labs, and a reporting tool that provides physical information (power requirements, weight, heat, rack space used, interface requirements, etc.) for systems in the database. The configuration tool must provide a graphical description of equipment racks and allow 2D visualization of placement of components.

ELMS is a MySQL database that contains asset information about all of the equipment in the engineering labs. It describes the manufacturer, model, and physical information about a device, as well as interface and connection information. The ELMS database contains enough information to generate reports on power, heat, weight loading, used/free rack space, etc.

Currently, the existing user interface to the database is written in PHP/Cake. (Cake is a Model/View/Controller rapid-development tool.) Other interfaces to the database for various other functions are written in Perl, Perl/CGI, and Perl/DBI.

The tables, briefly described below, give an overall idea of information content of the database.

  • nodes - contains all attributes for each node, including parent/child relationships
  • node_types – expands node type characteristics, identifies which data fields belong to each node type, defines which nodes may belong to which other nodes, etc.
  • models – provides detailed model information
  • manufacturer – provides detailed manufacturer information
  • connections – provides lists of interface pairs representing connections

Example use case:

A user wishes to add a new server (say, an IBM x325) to a rack. They request "add server," navigate to the Site/Bldg/Row/Rack, create a server, and give it a unique name (usually the fully qualified domain name: ibmsrv1.rtp.netapp.com). Supplying the Manuf/Model information fills out the default data fields (height, width, depth, power method, etc.) as well as the "structure" of the device: e.g., 2 power supply interfaces (ps1, ps2), 2 Ethernet interfaces (e0, e1), and 2 serial connections (serial0, serial1). The user updates attributes such as memory size (4 GB) and CPU speed (3.2 GHz), as well as the starting U position in the rack (height in U is derived from the model table). Once the server is created and linked into the hierarchy, the user then describes where the power supplies are plugged in. A similar procedure will specify a serial console connection (usually to a terminal server port located in the rack or nearby).

The user of the new system can then view a 2D representation of the specified configuration, obtain a report on power loading, etc.
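
To make the reporting side concrete, the sketch below shows the shape of a rack-level power report against the tables described above. The join and column names (`nodes.rack_id`, `nodes.model_id`, `models.power_watts`) are guesses for illustration only; a real implementation would use the existing ELMS column definitions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/** Illustrative report query: total power per rack. Column names are hypothetical. */
public class RackPowerReport {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://elms-host/elms", "user", "password"); // placeholder credentials
        String sql = "SELECT n.rack_id, SUM(m.power_watts) AS total_watts "
                   + "FROM nodes n JOIN models m ON n.model_id = m.id "
                   + "GROUP BY n.rack_id";
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.printf("rack %s: %d W%n", rs.getString(1), rs.getLong(2));
            }
        }
    }
}
```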

WebSphere Developer for System z

How This Project Will Benefit You the Student

Many areas of industry today still use the mainframe and its associated technologies, such as COBOL, PL/I, CICS, DB2, and VSAM. Lately there have been reports in the news that mainframes are more cost-efficient and secure than other competing platforms. These technologies will therefore be around for several years to come.

Because there has been a huge investment in the code base surrounding the mainframe, and rewriting it would be very costly, the focus today is on reusing that existing code base. Developers are focusing on how to modernize existing assets by augmenting them with newer technologies such as web services.

This project will introduce students to existing technologies as well as a modern IDE: IBM’s extension of Eclipse, WebSphere Developer for System z. Working on this project will sharpen modern skills (Java, UML modeling, and team building). It will also allow you to differentiate yourself with marketable legacy skills that your fellow students may not possess. Come be part of a project that is at the forefront of a growing market.

Background

IBM provides products and support to Fortune 500 companies that use and develop applications in a mainframe environment. These customers typically have two main issues with developing applications on the mainframe. The first is that the cost of MIPS1 on the mainframe is perceived to be higher than the cost of development on a workstation. The second is that mainframe development is more complex and less efficient than workstation development. This makes for longer, more costly development cycles.

To accommodate these customers, IBM has developed products like WebSphere Developer for zSeries (WD/z), which allows users to develop in an emulated mainframe environment. WD/z offers support for local and remote development of COBOL2, PL/I3, CICS4, and DB2 or VSAM (indexed file) applications that are about 90% compatible with the mainframe environment. Most of the applications on the mainframe are authored in COBOL and PL/I. CICS and DB2/VSAM are the most popular data access and storage mechanisms for applications on the mainframe. Two popular application models are batch and interactive (CICS). WD/z allows for the development of local batch or local CICS applications, written in COBOL or PL/I on the workstation, that access mainframe data locally from either a CICS indexed (VSAM) file or a DB2 database, saving the cost of mainframe MIPS in a more user-friendly and productive Windows environment.

How does development differ from the mainframe and the workstation?

One of the biggest differences between the mainframe and the workstation is how they invoke jobs such as compiling, syntax checking, and running/debugging programs. On the workstation, the user defines the environment in a file or with metadata before invoking a command to perform a particular task. On the mainframe, however, Job Control Language (JCL5) is used to define both the environment and the tasks themselves (in batch).

When a user wants to perform a batch task on the mainframe, the user must create a script written in JCL. This script tells the mainframe how the different tasks are to be performed, what resources to use, and in which order to perform the tasks. Upon successfully creating this script, the user submits the JCL file to the mainframe, and the mainframe performs the tasks laid out in the script.

Problems

IBM would like to add two enhancements to their emulated environments. The highest-priority enhancement is to add an emulated mainframe assembler environment to an existing IBM Eclipse IDE platform, in which a user can locally develop mainframe assembler. Once developed, the assembler code should be invokable from other applications, such as local COBOL batch and CICS applications.

One option for achieving this goal would be to investigate the feasibility of an open-source mainframe assembler compiler for the workstation. One example converts the assembler code into Java code and then runs this generated code on the workstation JVM, processing all the steps the assembler application was coded to perform. There are other options as well, including other vendor technologies.

Another enhancement would be to add an emulated JCL environment. This enhancement will allow users to develop local batch JCL files on the workstation with tools, such as syntax checking, that help in the development process. Once these files are created, the user should be able to submit them to the workstation or remotely to the mainframe and have the various emulated runtimes on the workstation, or the mainframe itself, carry out the tasks assigned by the JCL file. Any proposed solution will need to take into consideration the transfer of the data environment from the mainframe, or the setup of that data environment on the workstation.

Solution Plan

An NCSU Senior Design team, with the cooperation/help of IBM, will evaluate and attempt to implement a solution to solve these problems. The team’s first objective will be to evaluate the open source mainframe assembler compiler and runtime in order to determine if it is possible to integrate this into IBM’s existing technologies allowing a user to develop mainframe assembler in IBM’s emulated mainframe environment. Other technologies may be evaluated as well.

The second objective, if the evaluated assembler compiler and runtime can be integrated, will be to prototype and implement a solution to integrate the assembler environment into IBM’s emulation environment.

The third objective will be to implement a solution that allows a user to develop and run batch JCL files on the local workstation in a way that is also compatible with the mainframe environment. The last objective is to enable the emulated assembler environment to access local mainframe data: indexed files and DB2 data.

***Note***
Due to the time constraints on the project and the uncertainty of how long each objective will take, the project’s scope may be narrowed to accommodate these constraints.

1MIPS: Million instructions per second, a measure of microprocessor speed

2COBOL is a third-generation programming language, and one of the oldest programming languages still in active use. Its name is an acronym for COmmon Business-Oriented Language, defining its primary domain in business, finance, and administrative systems for companies and governments.

3PL/I ("Programming Language One", pronounced "pee el one") is an imperative computer programming language designed for scientific, engineering, and business applications. It is undoubtedly one of the most feature-rich programming languages that has ever been created and was one of the very first in the highly-feature-rich category. It has been used by various academic, commercial and industrial users since it was introduced in the early 1960s, and is still actively used today.

4CICS® (Customer Information Control System) is a transaction server that runs primarily on IBM mainframe systems under z/OS or z/VSE. CICS is available for other operating systems, notably i5/OS, OS/2, and as the closely related IBM TXSeries software on AIX, Windows, and Linux, among others. The z/OS implementation is by far the most popular and significant.

CICS is a transaction processing system (like TCAM) designed for both online and batch activity. On large IBM zSeries and System z9 servers, CICS easily supports thousands of transactions per second, making it a mainstay of enterprise computing. CICS applications can be written in numerous programming languages, including COBOL, PL/I, C, C++, IBM Assembler, REXX, and Java.

5Job Control Language (JCL) is a scripting language used on IBM mainframe operating systems to instruct the Job Entry Subsystem (that is, JES2 or JES3) on how to run a batch program or start a subsystem.

Translation Tool

Fujitsu Transaction Solutions is one of the top three suppliers of retail systems and services worldwide. Its integration of Microsoft DNA (Distributed InterNet Architecture) creates a high-performance yet open platform that retailers as diverse as Nordstrom and Payless Shoe Source are able to customize.

Currently, Fujitsu’s translation solution dumps text that needs to be translated into a text file. The translator must then be educated in how to manipulate the file so that it can be reprocessed for a given language. Fujitsu is looking for a tool that will take the current text file as input and present the translator with an intuitive UI that error-proofs the process.

The team will use state-of-the-art development tools such as Visual Studio .NET. The team will build on an existing design and implementation for a translation tool. The translation tool will be used by customers of Fujitsu Transaction Solutions to support their global deployment of products.

(Technologies Used: C#, Visual Basic.Net, Windows XP/2000)

MP3 Profiler

Background

Last semester’s Concert Technology senior design team built software that can profile an MP3 collection on a client's hard drive and report that metadata to a centralized server for storage in a database. The solution provides a downloadable scanning client, a server for receiving metadata information, a web server for providing browser-based functionality to manage and access the stored information, and a web service for programmatically accessing the managed information.

Project Overview

This project will expand upon the SDC Fall 2006 music cataloging project by adding a capability to analyze the contents of users’ music databases. This will require the development of a generalized mechanism for experimenting with algorithms that analyze the contents of a music database. This mechanism should include the following components:

  • A web page for submitting an arbitrary JavaScript formula that defines the categorization algorithm being submitted.
  • A servlet that builds a JavaScript DOM of relevant database objects, executes the script, and extracts the results (a minimal sketch of script execution follows this list).
  • A graphical results web page that displays the output of the executed formula.
  • An error page describing JavaScript execution errors, if any.
  • A generalized plugin-style mechanism for easily adding new algorithm categories on the server side, consisting of:
    • A well-defined Java interface declaring the functionality of the Java plug-in classes.
    • Dynamically loaded Java classes implementing that interface to build the correct DOM and extract the correct result set.
    • A JSP or other servlet results page to present the results in a category-specific way.
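
To make the script-execution step concrete, the sketch below (not the project's actual servlet) uses Java's standard `javax.script` engine to evaluate a user-submitted JavaScript formula against objects exposed from the music database; the model values here are invented stand-ins:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

/** Sketch: evaluate a user-submitted JavaScript formula against
 *  database-derived objects and extract the result. */
public class FormulaRunner {
    public static void main(String[] args) throws Exception {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        // Hypothetical model: expose two users' library sizes to the script.
        js.put("libraryA", 1200);
        js.put("libraryB", 950);
        // A toy "virtual distance" formula of the kind a user might submit.
        Object result = js.eval("Math.abs(libraryA - libraryB) / (libraryA + libraryB)");
        System.out.println("virtual distance = " + result);
    }
}
```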

The first formula category to be developed using the plugin-style category mechanism will handle formulas that calculate a virtual distance between users based on the contents of their music libraries. The result graph for this result set will be similar to the 2D distance graphs shown in some atlases.

Peer-to-Peer 3D Data Management

Background

Peer-to-peer (P2P) technology, according to ACM TechNews [1], has the potential to deliver “faster connectivity and development, more creative product differentiation, and more potential for e-commerce.” Motorola’s chief software architect John Waclawsky writes that future Web-based innovation and development will center around peer-to-peer overlays.

The P2P 3D Data Management project team will develop a simple P2P system that will manage 3D Data stored in I-Cubed’s Portable Project File Briefcase. This prototype will demonstrate simple query features of the P2P system.

Portable Project File (PPF) Briefcase is a prototype whose initial development comprised the Senior Design Project sponsored by I-Cubed last year. A PPF Briefcase is an encapsulation of 3D Computer Aided Design/Drafting (CAD) documents inside of an Adobe PDF for the purpose of enabling rapid and secure collaboration and communication of high-value engineering data between a project team and external resources or contractors. Each PPF file typically includes a graphic representation of its enclosed CAD files, along with important data about those files.

Product Data Management (PDM) and Product Lifecycle Management (PLM) systems commonly provide a means to manage large amounts of complicated project data through server-centric version control and document storage.

There are several scenarios in which PPF Briefcases must be managed outside of a traditional, server-centric PDM or PLM system. An example would be the need to send data offsite, where document management systems may be incompatible or nonexistent.

Project Requirements

Students will begin by addressing the need to use the PPF Briefcase model to manage data onsite in a lightweight peer-to-peer document management solution.

This project will require students to participate in the design and implementation of such a document management system by developing a peer-to-peer client capable of communicating with clients on different machines in a local network environment. Initially, this client must be able to do the following (a minimal discovery sketch appears after this list):

  • Query the network for a list of managed PPF Briefcases.
  • Identify the location(s) and latest iteration of a given PPF Briefcase.
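
The sketch below shows one minimal way such a query could work over UDP multicast. The message format, group address, and port are placeholders, not an I-Cubed protocol:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

/** Sketch: broadcast a "list briefcases" query and print peer replies. */
public class BriefcaseQuery {
    public static void main(String[] args) throws Exception {
        byte[] query = "LIST-PPF".getBytes("UTF-8"); // placeholder message format
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(2000); // give peers two seconds to answer
            socket.send(new DatagramPacket(query, query.length,
                    InetAddress.getByName("230.0.0.1"), 4446)); // placeholder group/port
            byte[] buf = new byte[1024];
            try {
                while (true) {
                    DatagramPacket reply = new DatagramPacket(buf, buf.length);
                    socket.receive(reply);
                    System.out.println(reply.getAddress().getHostAddress() + " manages: "
                            + new String(reply.getData(), 0, reply.getLength(), "UTF-8"));
                }
            } catch (SocketTimeoutException done) {
                // no more peers responding
            }
        }
    }
}
```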

Enabling Technology

Technology developed by I-Cubed for extracting metadata from a CAD model and creating a PPF document will be used, and students will be responsible for adding version and permission information to this document for use by the peer-to-peer management system.

P2P network technology utilizes the computing power and bandwidth of individual network participants rather than a small number of dedicated servers in order to publish and transfer data.

Adobe eXtensible Metadata Platform (XMP) is an XML labeling technology for Adobe’s PDF file format; XMP allows one to package metadata (data about a file) in the file itself.

Resources

[1] ACM TechNews (http://technews.acm.org). Oct. 30, 2006 Issue: “P2P: The Next Wave of Internet Evolution”

Web Security

The students will develop a "security auditor" that will allow users to document their security requirements. The application will crawl through a defined list of servers and document how our actual settings compare with our requirements. Users must be able to dynamically enter requirements and the list of target systems. In addition, the students will perform a Cyber Vulnerability Assessment as described by NERC guidelines. At a minimum, the assessment must include a review to verify that only required ports and services are enabled and a review for default accounts. Thorough documentation is essential.
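
As a hedged illustration of the port-review piece only (the approved-port list and scan range are invented; a real auditor would read both from the user-entered requirements), a minimal Java check might look like this:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Arrays;

/** Sketch: flag open TCP ports that are not on an approved list. */
public class PortAudit {
    private static final int[] APPROVED = {22, 443}; // hypothetical requirement; keep sorted

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "localhost";
        for (int port = 1; port <= 1024; port++) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 200); // 200 ms timeout
                if (Arrays.binarySearch(APPROVED, port) < 0) {
                    System.out.println("VIOLATION: unexpected open port " + port);
                }
            } catch (IOException closedOrFiltered) {
                // port did not accept a connection: nothing to report
            }
        }
    }
}
```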

Underwater Ad-Hoc Wireless Networking

Project: Develop software related to NCSU Autonomous Underwater Vehicle (Seawolf) (Student Club).

Background

The NC State underwater robotics team has developed an unmanned, autonomous underwater vehicle, Seawolf, that is capable of performing a large set of autonomous tasks in an arbitrary underwater environment (www.ncsurobotics.org). For example, the vehicle may be asked to find a small colored light somewhere in a 300’ tank, starting from any location. To accomplish this, the vehicle executes a search algorithm, relying on the accuracy of vehicle sensors (attitude, depth, heading, collision, optical) and physical modeling to accomplish the task. Making sure the vehicle is actually doing what it was commanded to do requires reliable control of the vehicle and real-time feedback of sensor data. Loss of contact, leaks, collisions, sync issues, and power loss can all contribute to failures. The underwater robotics team has relied on hardware-dependent direct connections to the vehicle for status and position communication, but this becomes logistically difficult in a testing environment (the vehicle must drag a wire around!). Development of a hardware-independent communication API will be a good step in the direction of non-contact communication (such as some form of wireless or acoustic link).

Another goal of the project is to develop a GUI that will display the vehicle state and permit remote command of the vehicle for testing purposes.

Finally, a modeling framework must be developed to assist algorithm development.

Summary of project goals

  1. Develop and document a communications API to receive vehicle information and command vehicular operations (one possible message shape is sketched after this list).
  2. Develop a reference GUI that receives and displays the vehicular information and allows the pilot to command the vehicle.
  3. Develop a modeling framework to compare the efficiency of algorithms developed for search and task patterns, allowing the most efficient algorithm to be recognized.
  4. Determine the most efficient algorithm for a given vehicle task (e.g., find the light).
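
One possible message shape for goal 1 is sketched below; the field names are illustrative only, not the team's actual API. The point of the interface is that the transport behind it (wire, Wi-Fi, or acoustic modem) can change without the GUI or mission code changing:

```java
import java.io.IOException;
import java.io.Serializable;

/** Hypothetical telemetry record for a hardware-independent communications API. */
public class VehicleStatus implements Serializable {
    public double roll, pitch, yaw;  // attitude, degrees
    public double depthMeters;       // from the depth sensor
    public double headingDegrees;    // from the compass
    public long   timestampMillis;   // when the sample was taken
}

/** Hypothetical link contract: telemetry in, commands out. */
interface VehicleLink {
    VehicleStatus readStatus() throws IOException;
    void sendCommand(String command, double argument) throws IOException;
}
```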

Issues

The NC State underwater vehicle is an actively used asset, so integration will pose logistics and scheduling issues. It will be important to develop the communications API up front and in conjunction with representatives from the team. It may then be necessary to develop a simple vehicle simulator to permit off-line development of the communications API and vehicle status display.

The real environment for the vehicle is in the water (i.e. the pool at NC State), and it is important that the interfaces be tested in the operating environment. Due to logistics issues with the pool staff and time constraints of robotics team members, it will be necessary to plan accordingly to perform in-water integration and testing.

Participants and interaction requirements

There are four parties involved in this effort: the Senior Design team, the Underwater Robotics Team, CSC SDC staff, and Northrop Grumman. In order to keep progress and integration in sync, we should communicate weekly via email. Teleconferences can be set up to deal with larger issues. Northrop Grumman plans to meet on campus three or four times throughout the semester to review progress and deal with any necessary issues.

Network Virtualization I

Network virtualization is the ability to manage, selectively isolate, protect, and prioritize traffic on a network. There are many technologies that enable this type of management. They include virtual private networks (VPN), virtual local area networks (VLAN), virtual internet protocol (IP) addresses (VIPA), network address translation (NAT), name-based addressing (e.g., DNS), and so on. Of special interest in this project are end-to-end aware networks, i.e., networking environments which are not only transparent to end-users, but are also aware of end-user applications and can make applications network-aware. This capability can facilitate the quality of service offered to those applications.

One effort in that direction is the Enlightened Computing project (http://enlightenedcomputing.org/index.php). This is a global alliance of partners that includes MCNC, Louisiana State University, North Carolina State University, G-Lambda (Japan), PHOSPHORUS (Europe), and a number of industrial partners, e.g., Cisco, Calient Networks, and IBM. The core technology of this project is all-photonic. Tools and applications will be deployed over 10 Gbps links that use GMPLS to provide control-plane management.

This semester, a specific test-bed activity of interest is the establishment of a High-Definition (HD) 1.5+ Gbps interactive video streaming demonstration. This will utilize a 10 Gbps light-path between LSU and NC State (via MCNC). The stream will deliver a real LSU course on high-performance computing. Over the spring semester, the project goal is to migrate the HD technology that delivers content to MCNC onto the NCSU campus (into Centaur Labs and/or EB2) via optical fiber between MCNC and NCSU.

Students will participate, along with MCNC and other partner engineers, in the network engineering and end-to-end implementation of the HD link, video and audio streaming, and its performance evaluation. As part of the project, students will also set up, operate, and evaluate technologies associated with less network-intensive video delivery, such as webcasting/podcasting (video and audio) and Grid Access nodes (video and audio). Students will also develop and demonstrate appropriate end-user and technology-related quality-of-service metrics and assessment tools, and provide full reports on both technology set-up and performance.
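
As one example of an end-user metric the team might implement (this is a sketch, not an Enlightened Computing tool), the Java probe below measures the inter-arrival jitter of an incoming UDP stream using an RFC 3550-style smoothed estimate, a simple indicator of whether a path can sustain smooth HD delivery; the port is arbitrary:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

/** Sketch: smoothed inter-arrival jitter of an incoming UDP stream. */
public class JitterProbe {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(5004)) { // placeholder port
            byte[] buf = new byte[2048];
            long last = 0;
            int gaps = 0;
            double meanGapMs = 0, jitterMs = 0;
            while (gaps < 10000) { // sample a fixed number of packet gaps
                socket.receive(new DatagramPacket(buf, buf.length));
                long now = System.nanoTime();
                if (last != 0) {
                    double gapMs = (now - last) / 1e6;
                    gaps++;
                    meanGapMs += (gapMs - meanGapMs) / gaps;                      // running mean
                    jitterMs += (Math.abs(gapMs - meanGapMs) - jitterMs) / 16.0;  // RFC 3550-style
                }
                last = now;
            }
            System.out.printf("mean inter-arrival %.3f ms, jitter %.3f ms%n",
                    meanGapMs, jitterMs);
        }
    }
}
```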

A subset of project goals and tasks will be assigned as Project I.

If you have questions about this project, feel free to contact NCSU CSC graduate students:

  • Ms. Lina Battestilli: lina AT mcnc.org
  • Ms. Xenia Mountrouidou: pmountr AT unity.ncsu.edu
  • Ms. Claris Castillo: ccastil AT ncsu.edu

Network Virtualization II

Network virtualization is the ability to manage, selectively isolate, protect, and prioritize traffic on a network. There are many technologies that enable this type of management. They include virtual private networks (VPN), virtual local area networks (VLAN), virtual internet protocol (IP) addresses (VIPA), network address translation (NAT), name-based addressing (e.g., DNS), and so on. Of special interest in this project are end-to-end aware networks, i.e., networking environments which are not only transparent to end-users, but are also aware of end-user applications and can make applications network-aware. This capability can facilitate the quality of service offered to those applications.

One effort in that direction is the Enlightened Computing project (http://enlightenedcomputing.org/index.php). This is a global alliance of partners that includes MCNC, Louisiana State University, North Carolina State University, G-Lambda (Japan), PHOSPHORUS (Europe), and a number of industrial partners, e.g., Cisco, Calient Networks, and IBM. The core technology of this project is all-photonic. Tools and applications will be deployed over 10 Gbps links that use GMPLS to provide control-plane management.

This semester, a specific test-bed activity of interest is the establishment of a High-Definition (HD) 1.5+ Gbps interactive video streaming demonstration. This will utilize a 10 Gbps light-path between LSU and NC State (via MCNC). The stream will deliver a real LSU course on high-performance computing. Over the spring semester, the project goal is to migrate the HD technology that delivers content to MCNC onto the NCSU campus (into Centaur Labs and/or EB2) via optical fiber between MCNC and NCSU.

Students will participate, along with MCNC and other partner engineers, in the network engineering and end-to-end implementation of the HD link, video and audio streaming, and its performance evaluation. As part of the project, students will also set up, operate, and evaluate technologies associated with less network-intensive video delivery, such as webcasting/podcasting (video and audio) and Grid Access nodes (video and audio). Students will also develop and demonstrate appropriate end-user and technology-related quality-of-service metrics and assessment tools, and provide full reports on both technology set-up and performance.

A subset of project goals and tasks will be assigned as Project II.

If you have questions about this project, feel free to contact NCSU CSC graduate students:

  • Ms. Lina Battestilli: lina AT mcnc.org
  • Ms. Xenia Mountrouidou: pmountr AT unity.ncsu.edu
  • Ms. Claris Castillo: ccastil AT ncsu.edu

Software For Sun SPOTs

A Sun SPOT (Small Programmable Object Technology) is an experimental device developed by Sun. Sun SPOT hardware includes a powerful 32-bit processor, sensor capabilities, and an integrated, software-controllable on-board radio transceiver module (ChipCon CC2420; IEEE 802.15.4 compliant). Sun SPOTs are small (< 55 cc) and battery powered (3.7 V, 720 mAh rechargeable lithium-ion). A Sun SPOT device is programmed using Java and has almost unlimited potential for countless applications. Please see the following URL for a detailed description: Program the world! http://www.SunSpotWorld.com/

Sun SPOT kits are not commercially available at this time in the U.S. and have a very limited distribution. NC State is one of only a very small handful of universities experimenting with them.

The project for this semester is to investigate software development topics related to wireless sensor systems and networks. Specific topics include ad hoc routing in Wireless Sensor Networks (WSNs), power management implications in WSNs, security issues in WSNs, and applications of WSNs based on available sensors (e.g., accelerometers).

Project details to be determined by the team in conjunction with CSC faculty and mentors from Sun Labs.

Nortel-Weston Ergonomics System: Phase II

Problem Statement

Ergonomics is the study of the interaction of humans with their environment, and the application of situationally correct techniques in order to optimize the well-being of the humans involved. Ergonomists contribute to the design and evaluation of tasks, jobs, products, environments and systems in order to make them compatible with the needs, abilities and limitations of people.

Weston Solutions, Inc. (http://www.westonsolutions.com) is a leading employee-owned environmental and redevelopment firm delivering comprehensive solutions to complex problems for industries and governments worldwide.

One of Weston Solutions' tasks involves capturing ergonomics information about Nortel employees. This is currently a manual process that is not only time-consuming but also requires a high level of human intervention, and it is therefore prone to error. There is thus a need to automate this system, using a centralized relational database for storing all relevant data and a web-based GUI for ease of user interaction with the system.

Design

The Ergonomics module contains the following kinds of information:

Nortel:
Site: Each site owned by Nortel, its address and location information

Employee:

  1. Employee Specific Information – Employee ID, Name, Contact Details
  2. Personal Information – Height, Medical History, Dominant Hand, etc.
  3. Workspace Configuration – The kind of work environment that the employee is in, such as the height of his/her desk, type of chair, monitor, mouse, keyboard, etc.
  4. Equipment – The kind of equipment that the employee uses, such as his/her telephone, headphones, etc.

Weston:
Prime – The Weston employee who is the prime for a particular site

Evaluation Form

Discomfort: Detailed information about the discomfort faced by each employee at his/her workspace.

Recommendation: The recommendations given by the site Prime to each employee for correcting any possible problems.

Implementation

The following technologies will be used for implementing the automated system:

  • Database: Oracle 9i
  • Web Server: Microsoft IIS
  • Front-end: HTML/DHTML, ASP with VBScript/Javascript

If you have questions about this project, feel free to contact NCSU CSC graduate students:

  • Ms. Aditi Mundle: mundle_aditi AT yahoo.co.in
  • Mr. Pravin Mehta: pkmehta AT ncsu.edu

Automatic Website Generation

Our company specializes in making medical data, in this case data related to echocardiograms, available to anyone at any time. The existing system is basically a web-based PACS (Picture Archiving and Communication System) that permits related patient data to be archived along with actual echocardiogram video clips. These clips, accompanied by patient-related data, permit the physician to observe cardiac functioning and record observations and/or a diagnosis. The proposed project is to create a customizable web interface that allows physicians to record data and observations from patient echocardiograms. iCardio staff will provide a spreadsheet of customization options. The task is to automatically generate a data entry and reporting website.

The choices for a user of the automatic site generation tool should include these operations:

  1. Upload a sample set of measurements.
  2. Design the patient demographics section.
  3. Design the study data section.
  4. Add, modify and remove report phrases.
  5. Save to database.

The existing database system is based on PHP 5 and MySQL 5 running on a Red Hat Linux ES4 system. The current technology is based on the Smarty template engine, but it is probably best to build the new system based on Ruby, Rails, and AJAX.

The justification for this system is that iCardiogram employees spend approximately 15-20 hours per new client to set up a customized web-based PACS. The majority of this time is spent on template customization. The proposed system will position us above our competitors in the marketplace. Reducing the time needed to create a customized web site by automating template selection will save time and money for both the client and iCardiogram on future installations.

The first goal is to create a customization system useable by iCardio staff for new client setup. A second goal is to polish the automated setup process so it is possible for end users (physicians or their staff) to make the customization choices.

A demonstration of a sample target system is available online. A demo username and password for the web-based system are included below. You will need the QuickTime player to view sample echocardiograms. Download this free viewer software using the links below.

For Windows:

http://www.apple.com/quicktime/download/win.html/

For Macintosh:

http://www.apple.com/quicktime/download/mac.html/

The URL for the iCardio site is

https://icardiogram.com/

Username: demo
Password: ********

Once you log in, you will see a sample study list. For help on using the website, go to

http://icardiogram.com/Support/HowToReadReview.pdf