Artificial Saraswati
Interdisciplinary Ph.D. Research Project Proposal
For Degree: Intelligent Music and Media Technology
Departments: Computer Science, Mechanical Engineering, Electrical and Computer Engineering, Music & Psychology
by,
Introduction/Overview:
In the Hindu religion, Saraswati is known as the Goddess of music and knowledge. This proposal describes a Ph.D. project to create a knowledge-based system for artificially intelligent music performance. The goal of Artificial Saraswati is to have a musical robot perform on stage, reacting to a human musician. The project will draw on knowledge from many disciplines: music, computer science, electrical engineering, philosophy, neuropsychology, and genetic algorithms. Below is a list of the research and development areas that must be accomplished to make this project a success, each of which is discussed in turn:
Artificial Saraswati hardware:
Artificial Saraswati is the name of the musical robot that will be constructed during this project. She will be a stringed instrument that creates acoustic sound using motors and robotic technology.
Proposed Research/Experiments:
Design Considerations:
Artificial Saraswati software:
This software will act as a musician's brain: it reacts to what it "hears" (transmitted sensor data) as a human performs on digital instruments such as the Electronic Sitar (ESitar), Electronic Tabla (ETabla), and Radio Drum, and sends control signals to the robot to create acoustic sound.
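As an illustration, the listen-decide-actuate loop described above can be sketched as follows. The message format, note numbers, and `PLUCK` command are hypothetical placeholders, not the actual sensor or robot protocol.

```python
# Minimal sketch of the sense -> decide -> actuate loop (all formats hypothetical).

def parse_sensor_message(msg: str) -> dict:
    """Parse a hypothetical 'instrument pitch velocity' sensor message."""
    instrument, pitch, velocity = msg.split()
    return {"instrument": instrument, "pitch": int(pitch), "velocity": int(velocity)}

def choose_response(event: dict) -> dict:
    """Toy 'musician brain': echo the human's pitch a fifth higher, and softer."""
    return {"pitch": event["pitch"] + 7, "velocity": max(1, event["velocity"] - 20)}

def to_robot_command(action: dict) -> str:
    """Encode a control signal for the robot's plucking mechanism."""
    return f"PLUCK pitch={action['pitch']} vel={action['velocity']}"

event = parse_sensor_message("esitar 60 90")
print(to_robot_command(choose_response(event)))  # PLUCK pitch=67 vel=70
```

A real implementation would replace the toy response rule with the learned, stylistically aware models discussed later in this proposal.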
Proposed Research/Experiments:
System Design Considerations:
New Instruments for Musical Expression: [6]
New instruments for musical expression will capture human gestural information, enabling a performing musician to create a new genre of music and media with the aid of software, and enabling Artificial Saraswati to capture data about the human musician's playing in real time.
Proposed research to create new controllers:
The Electronic Sitar
One controller will be an Electronic Sitar (ESitar), which will enhance the performance of a real sitar (a traditional 19-string instrument of India).
The Electronic Tabla
Another controller to be designed is the Electronic Tabla (ETabla). The Tabla is a traditional set of drums of India.
As of now, the ETabla uses force sensing resistors to trigger sounds and graphics. The new goal is to upgrade the ETabla to use a TacTex Controls, Inc. pressure-sensitive touch pad, which obtains sensor data along the x, y, and z axes, for a better gestural response. [8]
For more information on the ETabla, visit:
http://www.cs.princeton.edu/sound/research/controllers/etabla/
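To illustrate what x, y, z sensor data makes possible, here is a minimal sketch of classifying a strike from a normalized pad reading. The zone radius and pressure threshold are invented for illustration and would need real calibration against the TacTex pad.

```python
import math

def classify_strike(x: float, y: float, z: float,
                    center=(0.5, 0.5), rim_radius=0.45, threshold=0.1):
    """Classify a normalized (x, y) pad position and pressure z into a
    hypothetical stroke zone; all constants are illustrative, not calibrated."""
    if z < threshold:
        return None  # too light to count as a strike
    r = math.dist((x, y), center)  # distance from pad center
    zone = "center" if r < rim_radius / 2 else "rim"
    return {"zone": zone, "pressure": z}

print(classify_strike(0.5, 0.52, 0.8))  # {'zone': 'center', 'pressure': 0.8}
```

Distinguishing center strikes from rim strikes (and grading pressure continuously) is exactly the kind of gestural nuance the force sensing resistors cannot capture.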
The Radio Drum [9]
I will also help develop the Radio Drum, created by Professor Andy Schloss and others. The Radio Drum consists of two parts: a rectangular surface ("drum") with embedded antennae, and two transmitters embedded in conventional sticks ("mallets").
Proposed research and development:
·        Help redesign the Radio Drum to obtain the response characteristics of a fine acoustic instrument, by sampling and processing the analog gesture signals.
·        Help add wireless sticks, which would broaden its possible uses in popular music and other applications, including as a conducting device for karaoke. [10]
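As a rough illustration of processing the analog gesture signals, the sketch below estimates stick position from the signal amplitudes received at four corner antennae using a simple weighted centroid. The real Radio Drum electronics and calibration are considerably more involved; this only conveys the principle.

```python
def mallet_position(a_nw: float, a_ne: float, a_sw: float, a_se: float):
    """Estimate normalized (x, y) stick position over the drum surface from
    four corner-antenna amplitudes -- a weighted-centroid sketch only."""
    total = a_nw + a_ne + a_sw + a_se
    x = (a_ne + a_se) / total   # weight toward the right edge
    y = (a_nw + a_ne) / total   # weight toward the top edge
    z = total                   # overall amplitude grows as the stick nears
    return x, y, z

print(mallet_position(1.0, 1.0, 1.0, 1.0))  # centered: (0.5, 0.5, 4.0)
```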
Media Software Design:
Real-time software must be designed to take control signals from the ESitar, ETabla, and Radio Drum, as well as from Artificial Saraswati, to produce a multimedia-based entertainment experience. Control signals will trigger both sound and performance-based visual graphics.
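One way to structure this routing layer, sketched below with hypothetical handler functions: each incoming control event fans out to every registered sound and visual handler.

```python
# Sketch of a control-signal router; handler behavior is illustrative only.
class MediaRouter:
    def __init__(self):
        self.handlers = []

    def register(self, handler):
        """Add a callable that consumes control events (sound, visuals, ...)."""
        self.handlers.append(handler)

    def dispatch(self, event: dict) -> list:
        """Fan one control event out to all handlers; collect their results."""
        return [handler(event) for handler in self.handlers]

router = MediaRouter()
router.register(lambda e: f"sound: play sample for {e['source']}")
router.register(lambda e: f"visuals: flash on {e['source']} hit")
print(router.dispatch({"source": "ETabla", "velocity": 90}))
```

Keeping sound and graphics behind the same dispatch interface lets a single gesture drive both media simultaneously, as the performance design requires.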
Sound Control:
Research/Development:
Artificial Saraswati ChucK or Marsyas:
A ChucK or Marsyas program will be written to control sound processing of the performance of the ESitar, ETabla, and Radio Drum. Features should include:
Visual Control:
Research/Development:
Proposed Timeline:
2003 July – December: Build Electronic Sitar Controller
2004 January – April: Audio Feature Extraction
    Introduction to Music Information Retrieval
    Digital Audio Effects
    VICON Gestural Capturing at
    Digital Signal Processing
2004 May – August: Affective Computing
2004 September – December: Introduction to Machine Learning
    Acoustics of Musical Instruments
    Introduction to Marsyas
2005 January – April: Music Information Retrieval
    Begin building Intelligent Music Software - IntelliTrance
2005 May – August: Background Research on Musical Robotics
    Begin Working on Bayan Robot
    Introduction to Advanced Recording Techniques
    Build Wearable Sensors - KIOM
2005 September – December: Computer Music Seminar
    Brain Signal Processing using MEG
2006 January – April: Computer Music Seminar
    Complete Bayan Robot
    Begin Redesign of ESitar
2006 May – December: IntelliTrance for Robotic Improvisation Composition
2006 – 2008: Research/Complete Dissertation
References:
[1] Singer, E., K. Larke, and D. Bianciardi. "LEMUR GuitarBot:
[2] Wright, M. and A. Freed. "Open SoundControl: A New Protocol for Communicating with Sound Synthesizers," Proceedings of the International Computer Music Conference, 1997.
[3] Cope, D. The Algorithmic Composer (Computer Music and Digital Audio Series, Vol. 16), A-R Editions; Book and CD-ROM edition (June 2000).
[4] Cope, D. Experiments in Musical Intelligence (Computer Music and Digital Audio Series, Vol. 12), A-R Editions; Book and CD-ROM edition (July 1996).
[5] Rowe, R. Machine Musicianship, MIT Press (March 5, 2001).
[6] Cook, P. R. "Principles for Designing Computer Music Controllers," ACM CHI Workshop on New Interfaces for Musical Expression (NIME),
[7] Wilson, S., M. Gurevich, B. Verplank, and P. Stang. "Microcontrollers in Music HCI Instruction - Reflections on our Switch to the Atmel AVR Platform," Proceedings of the International Conference on New Interfaces for Musical Expression (NIME),
[8] Kapur, A., G. Essl, P. Davidson, and P. R. Cook. "The Electronic Tabla Controller," Proceedings of the International Conference on New Interfaces for Musical Expression (NIME),
[9] Mathews, M. and W. A. Schloss. "The Radio Drum as a Synthesizer Controller," Proceedings of the International Computer Music Conference, Ohio State, 1989.
[10] Driessen, P. and A. Schloss. Grant Proposal to NSERC.
[11] Puckette, M. and T. Apel. "Real-time audio analysis tools for Pd and MSP," Proceedings of the International Computer Music Conference, 1998.
[12] Puckette, M. "Pure Data: another integrated computer music environment," Proceedings, 1996.
[13] Cook, P. R. and G. Scavone. "The Synthesis ToolKit (STK)," International Computer Music Conference,
[14] Driessen, P. F. and A. Schloss. "New algorithms and technology for analyzing gestural data," IEEE Pacific Rim Conference,
[15] Essl, G. "Physical Wave Propagation Modeling for Real-Time Synthesis of Natural Sounds," Ph.D. Thesis,
[16] Dumitras, A. and B. Haskell. "An encoder-only texture replacement method for effective compression of entertainment movie sequences," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP02), Orlando, FL, May 13-17, 2002.