Project proposals are due Monday, April 1, 2002.
Projects should be done in groups (2-4 people),
unless there are special circumstances that have been cleared
with P. Cook before the proposal deadline.
Proposals must be a web page containing the following components:
Project title (this can be preliminary)
Project participants (include major and class, and what skills/expertise each group member will bring to the project)
Summary of project: two pages of text minimum, plus sketches or figures depicting the important architectural components, proposed interfaces, etc.
Proposed timeline to completion. Be fairly detailed here; one-week granularity is suggested.
Bridge the great digital divide of Olden St.: build a project that convincingly does something interesting between the two labs. Possibilities include cameras to capture and enhance the sense of presence across the street (asserting, without proof, that the same approach would work across greater distances), haptic devices (a force-feedback joystick and/or mouse) to aid communication, etc.
Wavelet-encode a YUV stream so we can ship it over the network; we've done other work on adapting the layers when we encounter congestion. We have a wavelet encoder for reference, but it's not quite what we need (for one thing, it reads in an entire file rather than encoding on the fly). To make this more interesting, I have an idea for adding forward error correction (FEC) to the stream, which we need because we purposely lose packets on the network in an effort to find the available capacity.
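As a concrete starting point for the FEC piece, here is a minimal sketch (Python, purely illustrative) of packet-level forward error correction: one XOR parity packet per group of k data packets, which can recover any single lost packet in a group. The group size, packet size, and simulated loss rate are all assumptions, not part of the existing encoder.

```python
# Minimal XOR-parity FEC sketch: all constants are illustrative assumptions.
import os
import random

K = 4            # data packets per FEC group (assumption)
PKT_SIZE = 1024  # bytes per packet (assumption)

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_group(packets):
    """Append one parity packet: the XOR of the K data packets."""
    parity = bytes(PKT_SIZE)
    for p in packets:
        parity = xor_bytes(parity, p)
    return packets + [parity]

def decode_group(received):
    """received: list of (index, packet) pairs; index K is the parity slot.
    Recovers at most one missing data packet per group."""
    present = dict(received)
    missing = [i for i in range(K) if i not in present]
    if not missing:
        return [present[i] for i in range(K)]
    if len(missing) == 1 and K in present:
        # XOR of the parity and all surviving data packets yields the loss.
        acc = present[K]
        for i, p in present.items():
            if i != K:
                acc = xor_bytes(acc, p)
        present[missing[0]] = acc
        return [present[i] for i in range(K)]
    return None  # more than one loss in the group: unrecoverable

# Simulate one group crossing a lossy link (10% loss, an assumption).
data = [os.urandom(PKT_SIZE) for _ in range(K)]
sent = encode_group(data)
received = [(i, p) for i, p in enumerate(sent) if random.random() > 0.10]
print("recovered" if decode_group(received) == data else "lost")
```

A single parity packet handles only one loss per group; a real layered stream would likely want stronger codes (e.g., Reed-Solomon) and heavier protection on the base layer.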
Network Routing Project, details coming soon.
Investigate the relationship between wireless signal strength and error rate, with the goal of optimizing power consumption.
A weaker signal may consume less power at the expense of a higher error rate. Some errors can be corrected in software (through retransmission, or through error-correcting codes built into the protocol), so a 0% error rate is not necessarily the best operating point. It might be desirable to run at a signal strength that leaves a non-zero error rate but minimizes total power consumption; in short, software correction might be more desirable than hardware perfection.
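Here is a minimal sketch of that operating-point question, assuming a simple retransmission scheme; the error curve and all constants are made up, and in the real project they would come from measurement.

```python
# Power/error tradeoff under retransmission: toy model, made-up constants.
import math

T_PKT = 0.01          # seconds to transmit one packet (assumption)
E_OVERHEAD = 0.002    # fixed per-attempt energy in joules (assumption)

def p_err(p_tx_mw):
    """Toy packet error rate that falls off with transmit power."""
    return math.exp(-p_tx_mw / 20.0)

def energy_per_delivered_packet(p_tx_mw):
    # With independent retries, expected attempts = 1 / (1 - p_err).
    attempts = 1.0 / (1.0 - p_err(p_tx_mw))
    per_attempt = (p_tx_mw / 1000.0) * T_PKT + E_OVERHEAD
    return attempts * per_attempt

best = min(range(1, 201), key=energy_per_delivered_packet)
print(f"best power: {best} mW, "
      f"error rate there: {p_err(best):.3f}, "
      f"energy/packet: {energy_per_delivered_packet(best)*1000:.3f} mJ")
```

With these made-up numbers the energy minimum falls at a distinctly non-zero error rate, which is exactly the effect the project would try to demonstrate (or refute) on real hardware.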
We might want to do this dynamically, constantly adapting as the distance between a pair of communicating nodes changes.
In the context of ad hoc network routing, this might be even more complex. If the signal strength is high, an end-to-end communication might take, say, 2 hops on average; if I reduce the signal strength, it might take 4 hops on average. What signal strength yields the number of hops that minimizes total power consumption? I'm pretty sure people have looked at how big a radius you need to make an ad hoc net functional at all, but I'm not sure whether anyone has looked at the power angle.
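Here is a minimal sketch of the hops-versus-power question, assuming a toy path-loss model (range grows as P^(1/alpha)) plus a fixed per-hop receive/processing cost; every constant is an assumption.

```python
# Hops-vs-power tradeoff in multi-hop routing: toy model, made-up constants.
import math

ALPHA = 3.0          # path-loss exponent (assumption)
D_TOTAL = 400.0      # end-to-end distance in meters (assumption)
T_PKT = 0.01         # seconds per packet per hop (assumption)
E_HOP_FIXED = 0.005  # fixed receive/processing energy per hop, J (assumption)

def radio_range(p_tx_mw):
    """Toy model: range grows as P^(1/alpha); 1 mW reaches 10 m."""
    return 10.0 * p_tx_mw ** (1.0 / ALPHA)

def total_energy(p_tx_mw):
    hops = math.ceil(D_TOTAL / radio_range(p_tx_mw))
    # Every hop retransmits the packet at power p_tx, plus fixed overhead.
    per_hop = (p_tx_mw / 1000.0) * T_PKT + E_HOP_FIXED
    return hops * per_hop

for p in (10, 50, 100, 500, 1000):
    print(f"{p:5d} mW -> {math.ceil(D_TOTAL / radio_range(p)):2d} hops, "
          f"{total_energy(p)*1000:.2f} mJ end-to-end")
```

In this toy model, radiated energy alone always favors many short hops; it is the fixed per-hop cost that creates an interior optimum, and pinning down that cost on real cards would be part of the project.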
If the ideas make sense, we can do it for real on 802.11 cards, whose signal strengths are tunable. So far, we haven't investigated whether the signal strength on the Bluetooth cards we have is similarly muckable.
a) Today's high-end handhelds have processing and memory capacity close to the servers of a few years ago. It would be interesting to see what sorts of server software work well on these machines, since ad hoc environments may place extra burdens on whichever member of the group ends up acting as the server. Goal: characterize the performance and power consumption of various bits of server software and see if it's possible to do high-end collaboration without a fixed base station.
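As one way to start the characterization, here is a minimal sketch that times a burst of HTTP GETs against a web server assumed to be running on the handheld; the address, port, and path are hypothetical, and power draw would have to be measured externally (e.g., with an inline power meter).

```python
# Time a burst of HTTP GETs against a (hypothetical) handheld-hosted server.
import time
import urllib.request

URL = "http://192.168.1.50:8080/index.html"  # hypothetical handheld server
N = 100

latencies = []
t_start = time.time()
for _ in range(N):
    t0 = time.time()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    latencies.append(time.time() - t0)
elapsed = time.time() - t_start

latencies.sort()
print(f"{N / elapsed:.1f} req/s, "
      f"median {latencies[N // 2] * 1000:.1f} ms, "
      f"worst {latencies[-1] * 1000:.1f} ms")
```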
b) When bandwidth is scarce and processing is plentiful, compressing data is assumed to make sense. Play with various compression formats, including gzip at different levels of aggressiveness, bzip2, and even image formats (GIF, JPEG, progressive JPEG). Does the tradeoff really hold when the decompressor is nontrivial? What about when the handheld is the entity sending the data: can you still do compression in a way that makes sense? That is, does the time spent compressing cost less than the transmission time saved by sending compressed content?
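Here is a minimal sketch of that sender-side question: compression "wins" only if compression time plus the time to send the smaller payload beats the time to send the raw bytes. The link rate and sample payload are assumptions; on a real handheld the same loop would run on-device, and the receive side would measure decompression the same way.

```python
# Does compressing before sending pay off? Toy link rate and payload.
import time
import zlib

LINK_BPS = 1_000_000  # assumed link rate: ~1 Mbit/s wireless
payload = b"GET /telemetry?node=42&t=1017612000 HTTP/1.0\r\n" * 2000

raw_tx = len(payload) * 8 / LINK_BPS
print(f"raw: {len(payload)} B, tx {raw_tx*1000:.1f} ms")

for level in (1, 6, 9):  # gzip-style aggressiveness levels
    t0 = time.time()
    compressed = zlib.compress(payload, level)
    t_comp = time.time() - t0
    tx = len(compressed) * 8 / LINK_BPS
    verdict = "wins" if t_comp + tx < raw_tx else "loses"
    print(f"level {level}: {len(compressed)} B, "
          f"compress {t_comp*1000:.1f} ms + tx {tx*1000:.1f} ms -> {verdict}")
```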