Computer Science

Senior Design Center

Projects – Fall 2002


iSCSI (Internet SCSI, for Small Computer System Interface) is a new Internet Protocol (IP)-based storage networking standard for linking data storage facilities, developed by the Internet Engineering Task Force (IETF). By carrying SCSI commands over IP networks, iSCSI facilitates data transfers over intranets and storage management over long distances. The iSCSI protocol is among the key technologies expected to help bring about rapid development of the storage area network (SAN) market by increasing the capabilities and performance of storage data transmission. Because IP networks are ubiquitous, iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet, and can enable location-independent data storage and retrieval.

When an end user or application sends a request, the operating system generates the appropriate SCSI commands and data request, which then go through encapsulation and, if necessary, encryption procedures. A packet header is added before the resulting IP packets are transmitted over an Ethernet connection. When a packet is received, it is decrypted (if it was encrypted before transmission), and disassembled, separating the SCSI commands and request. The SCSI commands are sent on to the SCSI controller, and from there to the SCSI storage device. Because iSCSI is bi-directional, the protocol can also be used to return data in response to the original request.

The goal of this project is to develop a user-level library that semi-transparently replaces the standard open/close/read/write file functions, implementing them by building iSCSI requests in user space and submitting them over a socket.

The library would export 5 functions, something like:

  • iscsi_handle = iscsi_open(target_ip, target_port);
  • error = iscsi_read(iscsi_handle, buf, count);
  • error = iscsi_write(iscsi_handle, buf, count);
  • error = iscsi_lseek(iscsi_handle, position, whence);
  • iscsi_close(iscsi_handle);
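The five calls above can be sketched as a Java class (the actual project would more likely be written in C). Everything here is illustrative: the iSCSI login negotiation and PDU encoding are omitted, and an in-memory byte array stands in for the remote target.

```java
// Hedged sketch of the proposed library API. A real implementation would
// frame SCSI READ/WRITE commands into iSCSI PDUs and exchange them over a
// socket; here the "target" is just a local byte array.
class IscsiHandle {
    private final byte[] target = new byte[4096];   // stand-in for the LUN
    private long pos = 0;                           // current file offset

    static final int SEEK_SET = 0, SEEK_CUR = 1, SEEK_END = 2;

    static IscsiHandle open(String targetIp, int targetPort) {
        // Real code: connect a socket to targetIp:targetPort and perform
        // the iSCSI login negotiation.
        return new IscsiHandle();
    }

    int write(byte[] buf, int count) {
        System.arraycopy(buf, 0, target, (int) pos, count);
        pos += count;
        return count;                               // bytes written
    }

    int read(byte[] buf, int count) {
        System.arraycopy(target, (int) pos, buf, 0, count);
        pos += count;
        return count;                               // bytes read
    }

    long lseek(long position, int whence) {
        switch (whence) {
            case SEEK_SET: pos = position; break;
            case SEEK_CUR: pos += position; break;
            case SEEK_END: pos = target.length + position; break;
        }
        return pos;
    }

    void close() { /* real code: iSCSI logout, close the socket */ }
}
```

The lseek/whence semantics mirror the POSIX calls the library is meant to replace, which is what lets an application swap these in mechanically.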

The idea is that any raw I/O-style application (sio, Oracle) could be modified to replace the standard open/read/write/close/lseek calls with these library calls. We could then run the application directly against an iSCSI target, without depending on any initiator-side driver, file system, HBA, etc.

Phase 2 could add asynchronous I/O capability: The library, now multi-threaded, would have to support queued submission of SCSI requests, and out-of-order completion. When the I/O completes, the completion status would be returned asynchronously via the notify/callback function.
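A minimal sketch of the Phase 2 shape, assuming a thread-pool design: requests are queued for submission, may complete out of order, and report status through a callback. The names (CompletionCallback, AsyncIscsi) are invented for illustration.

```java
// Sketch: queued submission with asynchronous completion notification.
import java.util.concurrent.*;

interface CompletionCallback { void done(int requestId, int status); }

class AsyncIscsi {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Submit a request; completions may arrive out of order relative to
    // other requests and are reported via the callback.
    Future<?> submit(int requestId, Runnable scsiWork, CompletionCallback cb) {
        return pool.submit(() -> {
            scsiWork.run();          // build the iSCSI PDU, send, await reply
            cb.done(requestId, 0);   // 0 = success in this sketch
        });
    }

    void shutdown() {
        pool.shutdown();
        try { pool.awaitTermination(5, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```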

A Network Attached Storage system, as seen below, has a front end that takes network requests for files, converts those requests into storage lookup requests, passes these requests on to the storage back end, and returns the data in the proper protocol for the request. The storage back end takes the request and performs the necessary read and write operations to retrieve or write the data. The back end also takes care of any data restructuring (for example RAID, striping, etc.). The cache memory is used as an intermediary between these two subsystems.

The overall performance of a NAS device can be affected by the speed of either the front or the back end, and there has been much historical work evaluating the performance impact of changes to each. The cache memory, however, is less well understood in this environment. Cache memory serves as a pass-through medium between the front and back ends. It allows the front end to save a write request quickly while waiting for the back end to put the data physically on disk, and it allows the back end to pre-fetch and hold data in anticipation of the front end needing it. Cache memory, in and of itself, is a well-known quantity. When placed into this architecture, however, there are so many variables in front-end, cache, and back-end behavior that we cannot predict how overall system performance will respond to changes in configuration, technology, and software algorithms.
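One very small example of the kind of software model involved: an LRU block cache between the front and back ends, reporting its hit rate. Capacity, block addressing, and the access trace are all assumptions; the full model would add write-back and pre-fetch behavior.

```java
// Toy cache model: LRU over block numbers, tracking hit rate.
import java.util.LinkedHashMap;
import java.util.Map;

class CacheModel {
    private final LinkedHashMap<Long, byte[]> lru;
    private long hits = 0, accesses = 0;

    CacheModel(final int capacityBlocks) {
        // access-order map: eldest entry is the least recently used block
        lru = new LinkedHashMap<Long, byte[]>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> e) {
                return size() > capacityBlocks;
            }
        };
    }

    // One block access from the front end; a miss would trigger a back-end
    // read (and possibly a pre-fetch) in the full model.
    boolean access(long blockNo) {
        accesses++;
        if (lru.get(blockNo) != null) { hits++; return true; }   // cache hit
        lru.put(blockNo, new byte[0]);                           // fill on miss
        return false;
    }

    double hitRate() { return accesses == 0 ? 0 : (double) hits / accesses; }
}
```

Varying the capacity, replacement policy, and trace against a model like this is the basic experiment; validation would compare its hit rates to empirical results from the real test rigs.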

This project will work out the design for a software model of the cache memory in this environment, provide controls that allow for a wide range of variable changes, validate the base operation of the model, and use the model to predict the effect of some architectural changes. Specifically, this team will define the inputs/outputs for such a model, recommend an appropriate modeling strategy, implement the S/W model, validate the model against empirical results on real test rigs, and evaluate some architectural configuration changes and make performance recommendations.

WebWorks is a part of the Information Systems Department within John Deere’s Commercial and Consumer Equipment Division (C&CE) that develops internet, extranet, and intranet applications for John Deere. The WebWorks group acts as a set of internal consultants to the Division. WebWorks focuses on providing cost effective solutions written using solid industry techniques and open standards. WebWorks is a small group of programmers practicing Extreme Programming.

Project Description: Parts Marketing Competitive Intelligence

The proposed system is an interactive, online data collection tool that will enable the C&CE Parts group to collect competitive/business intelligence on the parts industry. In doing this, the Parts group can be proactive, versus reactive, and position itself as the #1 customer choice for both OEM and all-makes parts. This will be a new system, since there is currently no uniform process within the C&CE division for continuously monitoring competition.

The objective of this project is to create a tool that allows users to collect and report business intelligence data. Each data entry will be assigned a particular market channel (i.e., Mass, Internet), competitor, and data type. The user will also have the ability to include a link to an external document or attach an external document, in addition to providing a complete description. This project also includes a reporting side that displays summary and detailed intelligence data. The users will have the ability to search and filter for these reports.

The application should be written in Java, using JSP (JavaServer Pages) and servlet technology with a JDBC-accessible database. A thin-client architecture should be used so that users are exposed only to HTML and JavaScript. The servlets communicate with the domain, a layer of business objects. The domain communicates with the databases through a layer of database brokers, which must use JDBC to move data to and from the database. WebWorks is an XP environment, and we suggest that the Extreme Programming methodology be followed when developing this application:
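The servlet → domain → broker layering can be sketched as follows. All names here are invented; the broker is in-memory for illustration, whereas the real one would issue JDBC calls (Connection, PreparedStatement) against the database.

```java
// Illustrative layering: domain object, broker interface, broker impl.
import java.util.*;

class IntelligenceEntry {                 // domain object
    final String channel, competitor, dataType, description;
    IntelligenceEntry(String channel, String competitor,
                      String dataType, String description) {
        this.channel = channel; this.competitor = competitor;
        this.dataType = dataType; this.description = description;
    }
}

interface EntryBroker {                   // database broker layer
    void save(IntelligenceEntry e);
    List<IntelligenceEntry> findByChannel(String channel);
}

class InMemoryEntryBroker implements EntryBroker {
    private final List<IntelligenceEntry> rows = new ArrayList<>();
    public void save(IntelligenceEntry e) { rows.add(e); }
    public List<IntelligenceEntry> findByChannel(String channel) {
        List<IntelligenceEntry> out = new ArrayList<>();
        for (IntelligenceEntry e : rows)
            if (e.channel.equals(channel)) out.add(e);
        return out;
    }
}
```

Because the servlets and domain code depend only on the EntryBroker interface, the JDBC-backed broker can be swapped in without touching the layers above it.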

  • Plan and scope out each release. Prioritize tasks.
  • Plan small releases – every release should be as small as possible and should contain the most important business requirements. Release often. Follow a simple design.
  • Develop test cases that can be used in acceptance testing. Test-first coding can be used to integrate unit testing into the design (e.g., with JUnit).
  • Use pair programming and assume collective ownership – anyone can change any code anywhere.
  • Coding standards – document code, format code so it is easy to read and follow.
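In the test-first spirit suggested above, the test for a helper is written before the helper exists. The helper below (a price parser for intelligence entries) is purely hypothetical; the point is the workflow, not the function.

```java
// Hypothetical helper written to satisfy a pre-existing test:
//   assert PriceParser.toCents("$1,234.56") == 123456;
class PriceParser {
    // Parse a display price like "$1,234.56" into integer cents.
    static long toCents(String s) {
        String digits = s.replaceAll("[^0-9.]", "");   // strip $ and commas
        return Math.round(Double.parseDouble(digits) * 100);
    }
}
```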

Test Request Process Automation

The Network Quality Lab (NQL) test team provides a testing service for hard and soft modems developed by the Voice Band Modem (VBM) group. The current process for requesting this service is very manual and time-consuming. It also leaves many gaps for human error and for problems in translating wants and needs into tests and results.

Alongside the manual test request process, there is also a very manual process for tracking the current status of requests within the NQL team. Tracking estimated completion dates, current pass/fail results, and defects within a project and across projects is time-consuming, and the results are not always convenient to view.

The NQL team would like a web front end to automate and simplify the processes, using a database back end to store the data. The skills required to be successful at implementing this type of system are web interface design, ASP or other web application language programming, database design and implementation (preferably SQL but not required), and written communication (to be able to clearly document requirements, design, and user documentation). Students must be able to work well as a team, dividing the work amongst themselves with little supervision from the Intel sponsor.

Test Results Automation

The Network Quality Lab (NQL) test team provides a testing service for hard and soft modems developed by the Voice Band Modem (VBM) group. Part of our test suite includes emulation of phone line impairments using equipment known as a TAS. The TAS equipment generates a log file from each test run that is then hand-edited and copied into Excel. Not only is this process manual and time-consuming, but the many resulting Excel files quickly take up space on our shared drive.

The NQL team would like a web front end to automate and simplify the processes, using a database back end to store the data. This requires parsing the log files, storing the data in a database, and generating graphs similar to those we currently produce in Excel. Though we are currently targeting only our TAS testing, the application should be designed to allow extension to all of our other testing in the future.
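The parsing step might look like the sketch below, assuming a simple "key=value" log line format; the actual TAS log format would have to be taken from real log files, so every field name here is an assumption.

```java
// Parse one assumed TAS-style log line, e.g.
//   "test=V90_DOWNLOAD line=26dB result=PASS rate=49333"
// into an ordered field map ready for database insertion.
import java.util.*;

class TasLogParser {
    static Map<String, String> parseLine(String line) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String tok : line.trim().split("\\s+")) {
            int eq = tok.indexOf('=');
            if (eq > 0) fields.put(tok.substring(0, eq), tok.substring(eq + 1));
        }
        return fields;
    }
}
```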

The skills required to be successful at implementing this type of system are web interface design, ASP or other web application language programming, database design and implementation (preferably SQL but not required), and written communication (to be able to clearly document requirements, design, and user documentation). The team of students may decide to write the parsing and graphing code in a language other than ASP, meaning they would need experience in designing and programming in that chosen language as well. Students must be able to work well as a team, dividing the work amongst themselves with little supervision from the Intel sponsor.

Using Progress Energy's third-party software (OSI/PI), develop real-time process-monitoring graphics and automation tools and provide them to Energy Supply generation facilities. These graphics and tools will integrate existing plant data obtained from the various digital control systems and process-monitoring systems currently used to support operational generating facilities.

This effort will use industry-standard IT software to develop the necessary graphics and automation tools. The work requires integration with existing real-time data systems. The ability to reason about real-time data sources and to interface with engineering personnel is necessary for success.

Access for the student team will be provided at the Progress Energy office in downtown Raleigh. To work on this project, students must be willing to spend some time at the Progress Energy site.

Network Management Test Automation Extension

This challenging project involves testing the core technology used by several Cisco Systems’ network management products. Network management technology must be tested thoroughly prior to shipping to customers. Cisco has already automated a basic test suite. The focus of this project is to extend this automated test suite. To accomplish this, there are two major tasks (as time permits):

Task 1 - The first task is to develop a device interaction framework that can query various Cisco devices via SNMP and IOS to obtain values for certain attributes. These values should then be compared against the values reported by the network management application for validity.

Task 2 - The second task involves development of test scripts to verify that a given incoming SNMP trap burst is converted to alarms correctly in a network management server.
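The validation step in Task 1 reduces to comparing two attribute maps. In the sketch below both maps are stand-ins: the real framework would populate one from the device (via SNMP/IOS) and the other from the network management application.

```java
// Compare device-reported attribute values against the values the network
// management application holds, returning the names that disagree.
import java.util.*;

class DeviceValidator {
    static List<String> mismatches(Map<String, String> fromDevice,
                                   Map<String, String> fromNms) {
        List<String> bad = new ArrayList<>();
        for (Map.Entry<String, String> e : fromDevice.entrySet())
            if (!e.getValue().equals(fromNms.get(e.getKey())))
                bad.add(e.getKey());
        return bad;
    }
}
```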

Currently, the DataFlux web site is merely an electronic brochure where our prospects, customers, and partners come to find out what’s new with DataFlux. As DataFlux creates more in-depth relationships with our customers and partners, a new level of expectations and services is required. Because of this evolving model, the DataFlux web site is going to be redesigned into three separate, yet equally important, sections.

  1. Prospect Section- This is very much the web site in its current form. All product data will remain; the only major addition will be ‘cookies’. A prospect area will be created where a visitor can register to view DataFlux white papers, methodologies, etc. The prospect can then return to the DataFlux web site to browse additional resources without having to sign in again, while giving the DataFlux marketing department the ability to see which pages our prospects do and do not like.
  2. Customer Section- This will be a customer-only section, where customers can log on (with their customer data and some verification information) to view FAQs, download patches, and update their information. Registration codes will be available for the software packages they have licensed. A technical support section will be included here as well for online submission.
  3. Partner Section- This section will be very similar to the Customer section with minor enhancements geared toward our Partners.

In a nutshell, our desire is to take our static web site and make it a very personalized experience for our customers. By making these front- and back-end changes, we will improve our external web site’s interaction with our internal sales and customer databases.

This is only a brief overview of the ideas we have. More detail is available, as needed. Students should use their imaginations and creativity to take these basic concepts wherever they see the potential.

Transaction Performance Monitoring in Java

In this project, students will instrument a Java Virtual Machine (JVM) to monitor the method invocation and exit events associated with each client request, recording performance information for later analysis. As time allows, reporting software may also be developed to correlate events with specific client requests, summarize performance data per request, client, or workload, and reconstruct the execution path. The performance impact of this instrumentation must be no more than 5%.
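The accounting side of such instrumentation can be sketched independently of the JVM hooks. The class below only models the bookkeeping fed by entry/exit events; how those events are captured (e.g., via the JVM's profiling interface) is the actual project, and the method names in the test are invented.

```java
// Accumulate per-method elapsed time from paired enter/exit events.
import java.util.*;

class MethodProfiler {
    private final Map<String, Long> startNs = new HashMap<>();
    private final Map<String, Long> totalNs = new HashMap<>();

    void enter(String method, long nowNs) { startNs.put(method, nowNs); }

    void exit(String method, long nowNs) {
        long elapsed = nowNs - startNs.remove(method);
        totalNs.merge(method, elapsed, Long::sum);   // sum across invocations
    }

    long totalNanos(String method) { return totalNs.getOrDefault(method, 0L); }
}
```

Keeping the per-event work to a couple of map operations is also what makes the 5% overhead budget plausible.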

Fujitsu Transaction Solutions is one of the top three suppliers of retail systems and services worldwide. Its integration of Microsoft DNA™ (Distributed InterNet Architecture) creates a high performance yet open platform that retailers as diverse as Best Buy and REI Sports are able to customize.

In distributed applications, such as Point-of-Sale Systems, problem management can be a significant challenge. When in-store applications fail due to programming errors, data corruption, hardware failure, etc., local personnel typically lack the expertise to diagnose the problem or even interact in an efficient manner with call center tech support. The goal of this project is to create a simple, efficient mechanism to capture and report diagnostic information in the event of a system failure.

This semester’s project is a continuation of a Spring 2002 project that took initial steps to solve this problem. Two tools have been defined. The Capture Diagnostic Information (CDI) tool is a standalone Visual Basic program to be used by development, quality assurance (QA), and in-store personnel to capture diagnostic information. The Report Diagnostic Information (RDI) tool is a standalone Visual Basic program to be used by development personnel. The RDI tool provides reports to assist developers in the diagnosis and repair of problems.

This project involves the following technologies:

  • Visual Basic
  • Windows 2000/XP
  • The Windows Registry
  • COM
  • WMI (Windows Management Instrumentation)

Supplementary tutoring on these technologies will be provided as necessary.

The US Environmental Protection Agency is interested in raising public awareness of the dangers involved in sun exposure. There is a large body of scientific evidence that too much sun exposure, especially at a young age, can have drastic negative effects later in life. Depletion of the ozone layer exacerbates this problem. This project is a continuation of a three-year senior design effort to develop computer-based systems that contribute to raising this awareness.

The goal of this semester’s project is to design and implement a computer game that illustrates the effects of sun exposure to children ages 7-11. The basic idea that has been discussed (which can be used or not) is a third-person game looking down on a “baby” playing in the sun. The player guides the baby to pick up treasures and/or reach a goal location (e.g., the umbrella where “Mom” is) while the sun beats down. Over time the baby becomes more and more exposed to the sun (shown visually by coloring the baby in deepening shades of red). Sun exposure can be reduced in the game by having the baby put on a hat, slather on sunscreen, and the like. The overall idea is to illustrate sun-savvy behavior, including the effects of covering up, using sunblock, etc. The game should provide simple play for younger kids, with lots of rewards to reinforce basic concepts. The game can be done in 2D or 3D.
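The exposure mechanic described above is simple enough to sketch. The rates below are invented placeholders for whatever tuning the team chooses; the design point is just that shade stops accumulation and protection slows it.

```java
// Per-tick sun-exposure accumulation for the game's "baby" character.
class Baby {
    double exposure = 0;                  // 0 = pale; larger = redder shades
    boolean hat = false, sunscreen = false;

    void tick(boolean inShade) {
        if (inShade) return;              // under Mom's umbrella: no exposure
        double rate = 1.0;                // base exposure per tick (made up)
        if (hat) rate *= 0.5;             // hat halves exposure (made up)
        if (sunscreen) rate *= 0.25;      // sunscreen cuts it further (made up)
        exposure += rate;
    }
}
```

Rendering would then map the exposure value onto the deepening shades of red mentioned above.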

Team requirements to work on this project:

  • Java programming experience
  • Graphics experience (to produce 2D images in Photoshop and the like)
  • (Optional) Sound/music experience
  • (Optional) 3D programming / WildTangent experience (for a 3D-based game)

The project can include the following optional items:

  • Multilevel gameplay, with different levels to work through and progress past; multifaceted gameplay, with actions in the game that can be played or ignored. Multilevel/multifaceted characteristics allow younger kids to play the basic game and older kids to have a more challenging game to play. For example, a simple feature would be to collect treasures, and a more complex feature would be to find all treasures when a few are hidden.
  • Attractive graphics
  • Sound and/or theme song (custom would be better)

The goal of this project is to deliver near real-time power plant performance data in a graphical format to Duke Energy’s mobile cell phones. The information will be used to manage the plant assets when personnel are remote from the asset control rooms.

The relevant plant data for this project (power output as instantaneous megawatts) is already being collected and published as an XML document every 30 seconds to the Duke Energy MQ-Series message bus as part of an existing project (i.e., the dashboard project). This student project will create an additional message subscriber to the external message broker, thus providing a data source. The data should then be persisted (in a 15-minute cycle) and displayed to the cell phones upon request.
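Since samples arrive every 30 seconds and the display window is 15 minutes, the persistence layer only ever needs the last 30 samples per point. A minimal sketch, with class and method names invented:

```java
// Ring buffer holding the most recent 15 minutes of 30-second MW samples.
import java.util.*;

class MwTrendBuffer {
    private static final int WINDOW = 15 * 60 / 30;   // 30 samples = 15 min
    private final Deque<Double> samples = new ArrayDeque<>();

    void add(double megawatts) {
        if (samples.size() == WINDOW) samples.removeFirst();  // drop oldest
        samples.addLast(megawatts);
    }

    List<Double> trend() { return new ArrayList<>(samples); }  // oldest first
}
```

The phone-side trend plot would then render whatever trend() returns for the selected plant.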

Business Drivers/Benefits:

The drivers for this project are customer (manager or engineer) access to asset-management information when mobile (away from the asset control room). The target is to permit business decisions for an asset regardless of a manager’s or engineer’s location. An additional target is to reduce the number of devices a manager or engineer must carry to have corporate data access. The digital cell phone (J2ME-capable) is now very common at Duke; successful access to data in this format should make it possible to carry only one mobile device.


  1. The cell phone application should be delivered as a J2ME component.
  2. The data display should include at least a trend plot of point-data history (15 minutes). For example, when requested, a manager should be able to select plant Lee MW output and get back a 15-minute trend of the plant’s power output. Bar charts for displaying multiple instantaneous point values may also be included.
  3. The message subscriber and other server-based components should be implemented as J2EE components. These will run on a server in Duke’s DMZ (BradNet test complex). Message brokers, etc., will be provided as part of the DMZ complex.
  4. Any personalization of the application should be accessible via a workstation browser and implemented on the phone (i.e., not dependent on only the phone for personalization).
  5. Security for the delivered application will need consideration. Wireless application security is being pursued as part of other Duke projects and is expected to address this issue; this application should be flexible enough to permit the addition of Duke’s wireless security interfaces as they are developed.
  6. Nextel cell phones will need to be provided. Appropriate phone contracts will be arranged for voice (communication between project developers and sponsors) and data access.

Add Support for Verilog and Visual Basic to Visual SlickEdit

The goal of this project is to add support for Verilog and Visual Basic to Visual SlickEdit.

Verilog is a hardware description language used to design and verify digital hardware. Visual Basic is a general-purpose programming language developed by Microsoft.

Support would include syntax coloring (e.g. keywords, literals, etc.), syntax expansion, smart paste/indent (indenting text according to nesting rules), and symbolic tagging (recognizing variables, functions, etc., for easy lookup). Visual SlickEdit has a well-defined infrastructure for adding new languages, with plenty of sample implementations. Even so, it would be helpful if someone on the team had experience with basic parsing concepts.
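At its core, syntax coloring is token classification. The sketch below shows the idea for Verilog in plain Java; the real implementation would live in Slick-C against SlickEdit's language infrastructure, and the keyword list here is a small sample of the full Verilog set, not all of it.

```java
// Classify a Verilog token for syntax coloring: keyword vs. identifier.
import java.util.*;

class VerilogColorer {
    // Partial keyword list for illustration only.
    private static final Set<String> KEYWORDS = new HashSet<>(Arrays.asList(
        "module", "endmodule", "input", "output", "wire", "reg",
        "always", "assign", "begin", "end", "if", "else"));

    static String classify(String token) {
        return KEYWORDS.contains(token) ? "keyword" : "identifier";
    }
}
```

Symbolic tagging extends the same pass: when the classifier sees `module` or `reg`, the following identifier is recorded in the symbol table for lookup.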

Some of the work for Visual Basic support has already begun.

SlickEdit will provide needed technical documentation.

The programming languages used to implement this project are C/C++ and Slick-C (Visual SlickEdit's macro language).

Sales and Competition Database

Hatteras maintains a database of customer contacts and progress, as well as a database of competing companies and products. The current database is rather primitive in design and function. This project is to design and implement a new database and front end for tracking customer progress and customer information. The sponsor would also like a web robot integrated into the system: it would probe a specified collection of news sites for press releases by the competition, retrieve them into the database, and provide notification of new entries.
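The web robot's "new entry" detection can be sketched as remembering a fingerprint per page and flagging changes on the next probe. The HTTP fetch and database insert are stubbed out here, and the class name is invented.

```java
// Flag changed pages between probes by remembering a content fingerprint.
import java.util.*;

class NewsWatcher {
    private final Map<String, Integer> lastSeen = new HashMap<>();

    // Returns true when the page content changed since the previous probe
    // (or was never seen); real code would then insert a database row and
    // fire the notification.
    boolean probe(String url, String pageContent) {
        Integer prev = lastSeen.put(url, pageContent.hashCode());
        return prev == null || prev != pageContent.hashCode();
    }
}
```

A hash fingerprint keeps the stored state small; a production robot would also want to ignore boilerplate page regions so cosmetic changes don't trigger notifications.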

The database is to be designed in standard SQL so that it is portable across relational databases, including MySQL and Oracle. The front end is to be web-based, but the design team has the flexibility to implement the back end in Perl, servlets with JDBC, or other technologies. The front end is to be designed for both flexibility and ease of use by non-technical users.

The design team would be responsible for interviewing the sales and marketing organizations to determine the most common input, search, and notification operations required for the database, and designing the database and front end to provide this functionality.

Create an open-source replacement for Meeting Maker, accessible from both text-mode and GUI web browsers. Meeting Maker is used to reserve rooms for meetings, invite attendees, etc. The new Meeting Maker should be browser-based, using Java web programming and a back-end database.

Redesign the departmental web site with emphasis on usability and efficient retrieval of information. The new web site should address the following features:

  • Preserve all existing functionality and information
  • Search by topic
  • Database driven
  • First priority should be given to Graduate Program pages and expansion of Graduate Program database, per Director of Graduate Programs (Dr. Edward Davis)

Improvements to the site are expected to evolve through extensive usability studies and the development of additional functionality. We expect this project to continue over several semesters. Since this is the first semester, special emphasis should be placed on creating particularly effective documentation, suitable to pass on to a continuing team.

QVS develops solutions for the retail Point-of-Sale marketplace. QVS products are mostly “middleware” that fit with older, proprietary systems. QVS customers typically have a huge investment in their Point-of-Sale application and hardware. QVS helps them keep these older components around while at the same time allowing them to add on new technology when that technology provides appropriate ROI.

Many retailers today have text-based Point-of-Sale applications whose primary display is a 2-row by 20-column text display. Most of these applications expect input from a special 50-key Point-of-Sale keyboard. QVS customers would like to migrate toward a GUI interface, possibly with touch screens. The trend in our environment is for this GUI to be built as a “stand beside” application. The GUI code actually monitors device output of the original application (2x20 display and Point-of-Sale printer) and it is able to send in ‘keystrokes’ through a back-door interface to the keyboard driver.

QVS has developed about 8 different flavors of these GUIs which are mostly hard-coded to specific platform and customer requirements. QVS is in the middle of a new project to design and build a new GUI system which is an OS/Language/Application independent method for defining and scripting client GUIs. The new system is based on XML input.

QVS has a working version of this new tool, but is proposing an additional “Builder” IDE. This IDE would have the following components:

  1. Custom Visual Editor (Screen Builder)
  2. Custom Debugger (POS Simulator)
  3. Custom Text Editor (Logic Editor)
  4. Possibly an Automated Tester

The output from the “Builder” IDE is an XML-based script that defines the UI and some of the logic of an application.

This application could be built using Microsoft Tools or Java. The desktop target for QVS and its customers is Windows.

This is a major Java-based development project (with minor C programming): a distributed network manager for 802.11 wireless networks. The manager must be written in either Java or C. It must:

  • collect parameters from all control interfaces, including remote clients (application level, middleware, hardware, operating system, networking layer, wireless parameters, etc.);
  • run as a daemon/service;
  • perform settings changes, analysis, and decision making based on the parameters;
  • have a decision algorithm that is both reactive and proactive;
  • have a second-stage decision algorithm that is predictive; and
  • run on Linux, while being designed to be adaptable to a Windows environment.

Finally, course participants must perform testing and evaluation of the management system, demonstrate its operation, and provide high-quality software engineering documentation for the project.
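A hedged sketch of the reactive and predictive decision stages: a threshold rule on the current signal reading plus a moving average of recent samples as the predictive second stage. The threshold, window size, and "roam"/"stay" actions are all invented placeholders for whatever the real algorithm decides.

```java
// Two-stage decision sketch for an 802.11 link manager.
import java.util.*;

class WirelessManager {
    private final Deque<Double> recent = new ArrayDeque<>();
    private static final int WINDOW = 5;              // samples (made up)
    private static final double THRESHOLD = -80.0;    // dBm (made up)

    // Returns "roam" when the link is bad now (reactive) or trending bad
    // over the recent window (predictive); otherwise "stay".
    String decide(double rssiDbm) {
        if (recent.size() == WINDOW) recent.removeFirst();
        recent.addLast(rssiDbm);
        if (rssiDbm < THRESHOLD) return "roam";       // reactive stage
        double avg = recent.stream()
                           .mapToDouble(Double::doubleValue)
                           .average().orElse(0);
        return avg < THRESHOLD ? "roam" : "stay";     // predictive stage
    }
}
```

In the full system this decision loop would run inside the daemon, fed by the collected parameters, with the truly predictive second stage replacing the moving average.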
