

ecasound documentation - design


Kai Vehmanen

240799

Table of Contents

1: Preface

2: Development

2.1: Open design

2.2: System vs interface

2.3: Use cases

2.4: Sanity checks

3: Signal flow

4: Class descriptions

4.1: Core

4.1.1: ECA_PROCESSOR
4.1.2: ECA_SESSION
4.1.3: ECA_CONTROLLER

4.2: General

4.2.1: ECA_CHAINSETUP
4.2.2: ECA_RESOURCES

4.3: Data objects

4.3.1: CHAIN
4.3.2: SAMPLEBUFFER

4.4: Audio input/output

4.4.1: AUDIO_IO_DEVICE



1: Preface

Notice! This is not the actual class/source code documentation. This page is a collection of design notes and thoughts. I've noticed that by trying to write a good class description (roles, interface, use), you quickly find out about possible design flaws.

2: Development

2.1: Open design

Although specific use-cases are used for testing design concepts, they are not to be considered development goals.

2.2: System vs interface

System design is kept separate from user interface design and feature implementation.

2.3: Use cases

Use-cases are used extensively when designing the user interface.

2.4: Sanity checks

Sanity checks are done only to prevent crashes. All effects and operators happily accept "insane" parameters. :)

3: Signal flow

All the necessary information about signal flow is found in CHAIN objects. Currently signals can't be redirected from one chain to another. You can still assign inputs and outputs to multiple chains.

4: Class descriptions

4.1: Core

4.1.1: ECA_PROCESSOR

This class is the actual processing engine. It is initialized with a pointer to an ECA_SESSION object, which holds all the information needed at runtime. Processing is started with the exec() member function, and after that ECA_PROCESSOR runs on its own. If interactive mode is enabled in ECA_SESSION, ECA_PROCESSOR can be controlled using the ECA_CONTROLLER class, which offers a safe way to control ecasound. Another way to communicate with ECA_PROCESSOR is to access the ECA_SESSION object directly.
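
As a rough illustration of this startup sequence, here is a minimal sketch. Only the ECA_SESSION pointer argument and the exec() member function come from the notes above; the header names and the way the session is constructed are assumptions.

  // Hypothetical driver code; header names and session setup are assumed.
  #include "eca-session.h"
  #include "eca-processor.h"

  int main(void) {
    ECA_SESSION esession;                // runtime settings and chainsetups
    ECA_PROCESSOR emain (&esession);     // engine gets a pointer to the session
    emain.exec();                        // after this, the engine runs on its own
    return 0;
  }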

4.1.2: ECA_SESSION

A session contains all ECA_CHAINSETUP objects and general runtime settings (interactive mode, debug level, etc.). Only one ECA_CHAINSETUP can be active at a time. To make it easier to control how threads access ECA_SESSION, only the ECA_PROCESSOR and ECA_CONTROLLER classes have direct access to ECA_SESSION data and functions. Other classes can only use const members of ECA_SESSION.
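
One natural way to express this access policy in C++ is with friend declarations. The sketch below only illustrates the idea; the member names are illustrative and not copied from the real headers.

  #include <string>
  #include <vector>

  class ECA_CHAINSETUP;    // forward declarations
  class ECA_PROCESSOR;
  class ECA_CONTROLLER;

  class ECA_SESSION {
    friend class ECA_PROCESSOR;
    friend class ECA_CONTROLLER;

   public:
    // const interface, usable by any class
    bool is_interactive(void) const { return iactive_rep; }
    int debug_level(void) const { return debug_rep; }

   private:
    // mutating interface and data, reachable only via the friend classes
    void select_chainsetup(const std::string& name);
    std::vector<ECA_CHAINSETUP*> chainsetups_rep;
    bool iactive_rep;
    int debug_rep;
  };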

4.1.3: ECA_CONTROLLER

Represents the interactive mode of ecasound. It takes string commands and interprets them; it then either performs the task itself or passes the command to the engine (ECA_PROCESSOR). It also offers functions for modifying ECA_SESSION data. In some rare cases (for instance when quitting ecasound) it throws an exception.
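
The dispatch idea might look roughly like the sketch below. The command names, the helper function and the exception class are illustrative only, not taken from the real sources.

  #include <string>

  class ECA_QUIT_EXCEPTION { };   // thrown in the rare "quit" case

  void interpret_command(const std::string& cmd) {
    if (cmd == "quit" || cmd == "q") {
      throw ECA_QUIT_EXCEPTION();         // caller unwinds and shuts down
    }
    else if (cmd == "status") {
      // performed by the controller itself (reads ECA_SESSION data)
    }
    else {
      // passed on to ECA_PROCESSOR, for instance via the session object
    }
  }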

4.2: General

4.2.1: ECA_CHAINSETUP

ECA_CHAINSETUP represents a group of CHAINs and information about how they are connected. An ECA_CHAINSETUP can be constructed from a COMMAND_LINE object or loaded from an ASCII file. It's also possible to save an ECA_CHAINSETUP to a simple ASCII file. The syntax is identical to the command-line syntax.
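
The two construction paths could be declared as in the sketch below; the constructor and member signatures are assumptions based on the text above, not the real headers.

  #include <string>

  class COMMAND_LINE;

  class ECA_CHAINSETUP {
   public:
    ECA_CHAINSETUP(COMMAND_LINE& cline);        // parse ecasound options
    ECA_CHAINSETUP(const std::string& file);    // load from an ASCII file
    void save(const std::string& file) const;   // same option syntax on disk
  };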

4.2.2: ECA_RESOURCES

This class is an interface to the ~/.ecasoundrc configuration file.
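
A minimal resource-file reader along these lines is sketched below, assuming simple "key = value" lines; the real ECA_RESOURCES interface is richer and the member names here are illustrative.

  #include <fstream>
  #include <map>
  #include <string>

  class ECA_RESOURCES {
   public:
    // Read "key = value" lines from the given resource file.
    void load(const std::string& path) {
      std::ifstream fin(path.c_str());
      std::string line;
      while (std::getline(fin, line)) {
        std::string::size_type eq = line.find('=');
        if (eq == std::string::npos) continue;
        resources_rep[trim(line.substr(0, eq))] = trim(line.substr(eq + 1));
      }
    }
    // Return the value for a key, or an empty string if not set.
    std::string resource(const std::string& key) const {
      std::map<std::string, std::string>::const_iterator p = resources_rep.find(key);
      return p != resources_rep.end() ? p->second : "";
    }
   private:
    static std::string trim(const std::string& s) {
      std::string::size_type b = s.find_first_not_of(" \t");
      std::string::size_type e = s.find_last_not_of(" \t");
      return b == std::string::npos ? "" : s.substr(b, e - b + 1);
    }
    std::map<std::string, std::string> resources_rep;
  };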

4.3: Data objects

4.3.1: CHAIN

A CHAIN represents a single signal-flow abstraction. Every CHAIN has one slot for an audio input device and one slot for output. Because these device objects are unique and can be assigned to multiple chains, id numbers are used to refer to the actual device objects. In addition, a CHAIN has a name, a SAMPLE_BUFFER object and a vector of CHAIN_OPERATORs which operate on the sample data.
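
The data layout described above could look like the following sketch; the member names are illustrative, not copied from the real sources.

  #include <string>
  #include <vector>

  class SAMPLE_BUFFER;
  class CHAIN_OPERATOR;

  class CHAIN {
   public:
    std::string name;                        // chain name
    int input_id;                            // id of the assigned input device
    int output_id;                           // id of the assigned output device
    SAMPLE_BUFFER* audioslot;                // buffer the operators work on
    std::vector<CHAIN_OPERATOR*> chainops;   // operators, applied in order
  };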

4.3.2: SAMPLEBUFFER

Basic unit for representing sample data. The data type used to represent a single sample, the value range, channel count, sampling rate and system endianness are all specified in "samplebuffer.h".
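
The header might centralize definitions of the following kind; the actual type names and values used by ecasound may differ.

  // Sketch of "samplebuffer.h"-style definitions (names/values assumed).
  typedef short int sample_type;            // type of a single sample
  const sample_type sample_max = 32767;     // value range (16-bit samples)
  const sample_type sample_min = -32768;
  const int sample_channels = 2;            // channel count
  const long int sample_rate = 44100;       // sampling rate in Hz
  // System endianness would also be selected here, for instance with a
  // compile-time define chosen by the build configuration.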

4.4: Audio input/output

4.4.1: AUDIO_IO_DEVICE

AUDIO_IO_DEVICE is a virtual base class for all audio I/O devices. Audio devices can be opened in one of the following modes: read, write or read_write. Input and output routines take pointers to SAMPLE_BUFFER objects as their arguments. All devices are either non-realtime (normal files) or realtime (soundcards). Realtime means that once the device is started, data input/output doesn't depend on calls to the AUDIO_IO_DEVICE object. With non-realtime devices, input and output are done only when requested. For performance reasons, much of the responsibility is given to the user of these classes. For example, nothing prevents you from writing to an object opened for reading (probably with disastrous results).
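
The interface described above could be sketched as the abstract class below; the exact member names are assumptions.

  class SAMPLE_BUFFER;

  class AUDIO_IO_DEVICE {
   public:
    enum SIMODE { si_read, si_write, si_read_write };   // open modes

    virtual void open(SIMODE mode) = 0;
    virtual void close(void) = 0;

    // I/O routines take pointers to sample buffers
    virtual void get_sample_buffer(SAMPLE_BUFFER* sbuf) = 0;
    virtual void put_sample_buffer(SAMPLE_BUFFER* sbuf) = 0;

    // true for soundcards (data flows on its own once started),
    // false for normal files (I/O happens only when requested)
    virtual bool is_realtime(void) const = 0;

    virtual ~AUDIO_IO_DEVICE(void) { }
  };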