This just does nothing. It is only useful for test situations.
This crossfades two signals. If the percentage input is -1, only the left
signal is heard; if it is 1, only the right signal is heard. When it is 0,
both signals are heard at the same volume.
This allows you to ensure that your signal stays in a well-defined range.
If you have two signals that were between -1 and 1 before crossfading, they
will be in the same range after crossfading.
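As an illustration of how such a crossfade can work (a sketch, not the
actual aRts code; Python is used here only for readability):

    def crossfade(left, right, percentage):
        # percentage runs from -1 (only left) over 0 (equal mix) to 1 (only right)
        left_gain = (1.0 - percentage) / 2.0
        right_gain = (1.0 + percentage) / 2.0
        # the gains always sum to 1, so signals that were between -1 and 1
        # before crossfading stay in that range afterwards
        return left * left_gain + right * right_gain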
The opposite of a crossfader. This takes a mono signal and splits it into
a stereo signal: it is used to automatically pan the input signal between
the left and the right output. This makes mixes more lively. A standard
application would be a guitar or lead sound.
Connect an LFO, for example a sine or sawtooth wave, to inlfo,
and select a frequency between 0.1 and 5 Hz for a traditional effect, or
higher for special FX.
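A minimal per-sample sketch of such an autopanner (illustrative only, not
the aRts implementation):

    def autopan(sample, lfo):
        # lfo runs between -1 (hard left) and 1 (hard right);
        # a slow sine or saw on this input moves the sound back and forth
        left = sample * (1.0 - lfo) / 2.0
        right = sample * (1.0 + lfo) / 2.0
        return left, right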
This multiplies a signal by a factor. You can use this to scale signals
down (0 < factor < 1) or up (factor > 1) or to invert signals
(factor < 0). Note that the factor may be a signal and does not have to
be constant (e.g. an envelope or a real signal).
This adds two signals.
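In signal terms, both modules are simple per-sample arithmetic. A sketch:

    def mul(sample, factor):
        # factor may itself change per sample, e.g. an envelope signal;
        # factor = 0.5 scales down, 2.0 scales up, -1.0 inverts
        return sample * factor

    def add(sample1, sample2):
        return sample1 + sample2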
This delays the input signal for an amount of time. The time specification
must be between 0 and 1 for a delay between 0 seconds and 1 second.
This kind of delay may not be used in feedback structures, because it is
a variable delay: you can modify its length while it is running, and even
set it down to zero. But since in a feedback structure the module's own
output is needed to calculate the next samples, a delay whose value could
drop to zero during synthesis could lead to a stall situation.
Use CDELAYs in that setup, perhaps combine a small constant delay (of 0.001
seconds) with a flexible delay.
You can also combine a CDELAY and a DELAY to achieve a variable length delay
with a minimum value in a feedback loop. Just make sure that you have a
CDELAY involved.
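To illustrate why a variable delay is dangerous in feedback structures,
here is a sketch of a fractional delay line (illustrative only, not the
aRts code):

    class VariableDelay:
        def __init__(self, sample_rate, max_seconds=1.0):
            self.rate = sample_rate
            self.buffer = [0.0] * (int(max_seconds * sample_rate) + 1)
            self.pos = 0

        def process(self, sample, delay_seconds):
            self.buffer[self.pos] = sample
            # fractional read position behind the write position;
            # with delay_seconds == 0 this reads back the sample that was
            # just written, which is exactly what stalls a feedback loop
            read = (self.pos - delay_seconds * self.rate) % len(self.buffer)
            i0 = int(read)
            i1 = (i0 + 1) % len(self.buffer)
            frac = read - i0
            out = self.buffer[i0] * (1.0 - frac) + self.buffer[i1] * frac
            self.pos = (self.pos + 1) % len(self.buffer)
            return out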
This delays the input signal for an amount of time. The time specification
must be between 0 and 1 for a delay between 0 seconds and 1 second. The
delay is constant during the calculation, which means it cannot be modified.
This saves computing time, as no interpolation is done, and it is useful
for recursive structures. See the description above (Synth_DELAY).
A flanger is a time-varying delay effect. To make development of complex
flanger effects simpler, this module is provided, which contains the core
of a one-channel flanger.
It has the following ports:
- invalue
The signal which you want to process.
- lfo
Preferably a sine wave which modulates the delay time inside the
flanger (-1 .. 1).
- mintime
The minimum value for the delay inside the flanger in milliseconds.
Suggested values: try something like 1 ms. Please use values < 1000 ms.
- maxtime
The maximum value for the delay inside the flanger in milliseconds.
Suggested values: try something like 5 ms. Please use values < 1000 ms.
- outvalue
The output signal. It is important that you mix that with the
original (unflanged) signal to get the desired effect.
Hint: you can use this as a basis for a chorus effect.
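Putting the ports above together, the core of such a flanger might look
like this sketch (reusing the VariableDelay class sketched earlier;
illustrative only, the exact mapping in aRts may differ):

    def flanger_sample(invalue, lfo, mintime, maxtime, delay_line):
        # map the -1..1 lfo onto the mintime..maxtime range (milliseconds)
        ms = mintime + (lfo + 1.0) / 2.0 * (maxtime - mintime)
        outvalue = delay_line.process(invalue, ms / 1000.0)
        return outvalue

    # as noted above, mix the output with the dry signal, e.g.
    # mixed = 0.5 * invalue + 0.5 * outvalue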
Oscillators in aRts do not require a frequency as input, but a position
in the wave. The position should be between 0 and 1, which maps, for a
standard Synth_WAVE_SIN object, to the range 0..2*pi. To generate
oscillating values from a frequency, a Synth_FREQUENCY module is used.
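Conceptually, this is just a phase accumulator; a sketch (assuming a known
sample rate, not the actual aRts code):

    class Frequency:
        def __init__(self, sample_rate):
            self.rate = sample_rate
            self.pos = 0.0

        def process(self, frequency):
            # advance the 0..1 position by one sample's worth of phase;
            # at 440 Hz the position wraps around 440 times per second
            self.pos = (self.pos + frequency / self.rate) % 1.0
            return self.pos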
This is used for frequency modulation. Put your frequency at the frequency
input and another signal at the modulator input. Then set modlevel to
something, say 0.3. The frequency will then be modulated with the modulator.
Just try it. It works nicely when you put feedback in there, that is: take
a combination of the delayed output signal of the Synth_FM_SOURCE
(you need to feed it into some oscillator, as it only takes the role of
Synth_FREQUENCY) and some other signal, to get good results.
It works nicely in combination with Synth_WAVE_SIN oscillators.
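The idea can be sketched as a phase accumulator whose effective frequency
is shifted by the modulator (the exact scaling aRts uses may differ):

    class FMSource:
        def __init__(self, sample_rate):
            self.rate = sample_rate
            self.pos = 0.0

        def process(self, frequency, modulator, modlevel):
            # the modulator (e.g. a sine wave) bends the effective
            # frequency up and down by up to modlevel * frequency
            f = frequency * (1.0 + modlevel * modulator)
            self.pos = (self.pos + f / self.rate) % 1.0
            return self.pos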
Sine oscillator. Put a pos signal from Synth_FREQUENCY or Synth_FM_SOURCE
at the input and get a sine wave as output. The pos signal specifies the
position in the wave; the range 0..1 is mapped to 0..2*pi internally.
Triangle oscillator. Put a pos signal from Synth_FREQUENCY or Synth_FM_SOURCE
at the input and get a triangle wave as output. The pos signal specifies the
position in the wave; the range 0..1 is mapped to 0..2*pi internally. Be
careful: the input signal *MUST* be in the range 0..1 for the output signal
to produce good results.
Square oscillator. Put a pos signal from Synth_FREQUENCY or Synth_FM_SOURCE
at the input and get a square wave as output. The pos signal specifies the
position in the wave; the range 0..1 is mapped to 0..2*pi internally. Be
careful: the input signal *MUST* be in the range 0..1 for the output signal
to produce good results.
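One common way to map the 0..1 pos signal onto these three waveforms (a
sketch; the exact phase alignment in aRts may differ):

    import math

    def wave_sin(pos):
        return math.sin(pos * 2.0 * math.pi)

    def wave_tri(pos):
        # rises 0 -> 1 over the first quarter, falls 1 -> -1 over the
        # middle half, rises -1 -> 0 over the last quarter
        if pos < 0.25:
            return 4.0 * pos
        elif pos < 0.75:
            return 2.0 - 4.0 * pos
        else:
            return 4.0 * pos - 4.0

    def wave_square(pos):
        return 1.0 if pos < 0.5 else -1.0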
You need a Synth_PLAY module to hear the output you are creating. The
third parameter (channels) should be set to 1 or 2 to select mono or
stereo sound. The left and right channels should contain the
normalized input for the respective channel. For mono, only left is used.
If your input is not between -1 and 1, you get noise. You can for instance
use Synth_ATAN_SATURATE to ensure that the signal is in the right range.
There may only be one Synth_PLAY module in use, as this one directly
accesses your soundcard. Use busses if you want to mix more than one
audio stream together before playing.
Note that Synth_PLAY also does the timing of the whole structure. This
means: no Synth_PLAY = no source for timing = no sound. So you absolutely
need (exactly) one Synth_PLAY object.
If you want to capture what aRts plays, you can do this now, using the
Synth_FILEPLAY module. The "syntax" is the same as for Synth_PLAY:
you connect only the left channel when doing mono output, and both
channels when doing stereo output. Then you specify the number of channels
at the channels port, either 1 (mono) or 2 (stereo). aRts then dumps
what it plays into the file /tmp/arts.wav, overwriting the file if it
already exists.
IMPORTANT: always use this together with a Synth_PLAY module,
since otherwise the scheduling won't work correctly.
You can use this for debugging. It will print out the value of the signal
at invalue at regular intervals (approximately once per second), combined
with the comment you have specified. That way you can find out whether some
signals stay in certain ranges, or whether they are present at all.
This will play a wav file. It will only be present if you have libaudiofile
on your computer. (I'll probably rewrite that code though.) The wave file
will start playing as soon as the structure is created. It will stop as soon
as it's over; then done will be set to 1 (for instance for structure killing).
This will play an AKAI sample file. The sample will start playing as soon
as the structure is created. It will stop as soon as it's over; then done
will be set to 1 (for instance for structure killing).
As AKAI sample files contain a root pitch and a recording frequency, you
can do pitch shifting with AKAI samples. You can specify the desired
frequency, and the result will be pitch-shifted accordingly. The pitch
shifting will also affect the length of the sample.
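The relationship between pitch and length can be sketched as a resampling
step (illustrative; the names here are hypothetical, not aRts API):

    def playback_step(desired_freq, root_pitch_freq, rec_rate, out_rate):
        # how far to advance through the sample per output frame;
        # step > 1 plays faster (higher pitch, shorter sound),
        # step < 1 plays slower (lower pitch, longer sound)
        return (desired_freq / root_pitch_freq) * (rec_rate / out_rate)

    # playing n sample frames then takes n / step output frames,
    # which is why pitch shifting changes the length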
The same as Synth_PLAY_AKAI, except that it plays two AKAI sample files
at once, one left and one right. This is useful because AKAI samples
don't support real stereo; using two Synth_PLAY_AKAIs would be possible
as well, but would cost more CPU power. Here, pitch shifting is only
calculated once: the samples are assumed to be recorded at the same
sampling rate and on the same pitch, otherwise this will of course not work.
An uplink to a bus. Give signals to left and right, and the name of the bus
where the data should go on the "bus" port. The combined signal from all
uplinks with this name will appear on every downlink on that "bus".
Gets (the sum of) all data that is put on a certain bus (with the name
you specify at the "bus" port). To prevent the signal from going out of
range, specify the number of clients at the clients port. The signal
will be divided by this number.
Filters out all frequencies above the cutoff frequency.
Filters out all frequencies above the cutoff frequency (it's a 4-pole,
24 dB filter, which attenuates 24 dB per octave above the cutoff
frequency), but offers an additional parameter for tuning the filter
resonance, where 0 means no resonance and 4 means self-oscillation.
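As a rough sketch of how such a 4-pole resonant lowpass works (four
cascaded one-pole stages, each contributing 6 dB/octave, with the output
fed back scaled by the resonance; a crude approximation, not the aRts
code):

    import math

    class Lowpass24dB:
        def __init__(self, sample_rate):
            self.rate = sample_rate
            self.stages = [0.0, 0.0, 0.0, 0.0]

        def process(self, sample, cutoff, resonance):
            # crude one-pole coefficient, reasonable for cutoff << sample rate
            g = min(1.0, 2.0 * math.pi * cutoff / self.rate)
            # resonance 0..4 feeds the output back; around 4 the loop
            # gain becomes high enough for self-oscillation
            x = sample - resonance * self.stages[3]
            for i in range(4):
                self.stages[i] += g * (x - self.stages[i])
                x = self.stages[i]
            return x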
A damped resonator filter, filtering all frequencies around some peak value.
There is no useful way of specifying the middle frequency (the one that
won't be cut), since the inputs are two strange constants f and b. The code
is very old, from the first days of the synthesizer, and will probably be
replaced by a new filter which has a frequency and a resonance value as
parameters. Try something like b=5, f=5 or b=10, f=10 or b=15, f=15 though.
This is a nice parametric equalizer building block. Its parameters are:
- invalue, outvalue
The signal that gets filtered by the equalizer.
- low
How low frequencies should be changed. The value is in dB: 0 means
don't change low frequencies, -6 would mean attenuate them by 6 dB, and +6
means boost them by 6 dB.
- mid
How middle frequencies should be changed by the equalizer in dB (see low).
- high
How high frequencies should be changed by the equalizer in dB (see low).
- frequency
This is the center frequency of the equalizer in Hz; the mid frequencies
are around that value, with the low and high frequencies below and above it.
Note that the frequency may not be higher than half the sampling rate
(usually 22050 Hz), and not lower than 1 Hz.
- q
This influences how broad the mid spectrum is. It must be a positive
number > 0. A value of one is reasonable; higher values of q mean a
narrower spectrum of middle frequencies, lower values than one a
broader spectrum.
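For illustration, the mid band of such an equalizer is often realized as a
standard peaking filter; here is a sketch of the widely used biquad design
(a common textbook formulation, not necessarily the filter aRts uses):

    import math

    def peaking_coeffs(gain_db, frequency, q, sample_rate):
        # returns (b, a) coefficients of a peaking biquad:
        # y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
        amp = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * math.pi * frequency / sample_rate
        alpha = math.sin(w0) / (2.0 * q)
        b = [1.0 + alpha * amp, -2.0 * math.cos(w0), 1.0 - alpha * amp]
        a = [1.0 + alpha / amp, -2.0 * math.cos(w0), 1.0 - alpha / amp]
        # normalize so that a[0] == 1
        return [x / a[0] for x in b], [x / a[0] for x in a]

A higher q gives a smaller alpha and therefore a narrower peak, matching
the description of q above.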
Will play a sequence of notes over and over again. The notes are given in
tracker notation and are separated by semicolons. An example is
A-3;C-4;E-4;C-4;. The speed is given as seconds per note, so if you
want to get 120 bpm, you will probably specify 0.5 seconds/note, as
60 seconds / 0.5 seconds per note = 120 bpm.
You can give each note a length relative to the speed by using a colon
after the note and then the length: A-3:2;C-4:0.5;D-4:0.5;E-4;
demonstrates this. As you see, midi composing programs tend to offer
more comfort ;)
The Synth_SEQUENCE gives additional information about the position of
the note it is playing right now, where 0 means just started and 1 means
finished. This information can be used with Synth_PSCALE (see below).
The Synth_PSCALE module will scale the audio stream that is directed
through it from a volume of 0 (silent) up to 1 (original loudness) and
back to 0 (silent), according to the position (get the position from
Synth_SEQUENCE). The position where the peak should occur can be given
as top.
Example:
Setting top to 0.1 means that after 10% of the note has been played, the
volume has reached its maximum, and starts decaying afterwards.
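Assuming the linear ramps this description suggests, a per-sample sketch
(not the actual aRts code):

    def pscale(sample, pos, top):
        # volume rises from 0 at pos=0 to 1 at pos=top,
        # then falls back to 0 at pos=1
        if pos < top:
            volume = pos / top
        else:
            volume = (1.0 - pos) / (1.0 - top)
        return sample * volume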
This is a classic ADSR envelope which means you specify:
- active
whether the note is being pressed right now by the user
- invalue
the input signal
- attack
the time that should pass between the user pressing the note and the signal
reaching its maximum amplitude (in seconds)
- decay
the time that should pass between the signal reaching its maximum
amplitude and the signal going back to some constant level (in seconds)
- sustain
the constant level the signal is held at afterwards, until the user releases
the note
- release
the time that should pass after the user has released the note until the
signal is scaled down to zero (in seconds)
You'll get the scaled signal at outvalue. When the ADSR envelope is
finished, it will set done to 1. You can use that with Synth_STRUCT_KILL
to remove your structure then.
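The behaviour described above can be sketched as follows (a minimal,
linear-segment sketch; the real module's curve shapes may differ):

    class ADSR:
        def __init__(self, sample_rate):
            self.rate = sample_rate
            self.t = 0.0             # time since the note started
            self.release_t = None    # time when the note was released
            self.release_level = 0.0
            self.done = 0

        def process(self, active, invalue, attack, decay, sustain, release):
            # attack, decay and release are assumed to be > 0 seconds
            if active:
                if self.t < attack:                  # ramp up to 1
                    level = self.t / attack
                elif self.t < attack + decay:        # fall towards sustain
                    level = 1.0 - (1.0 - sustain) * (self.t - attack) / decay
                else:                                # hold at sustain
                    level = sustain
                self.release_level = level
                self.release_t = None
            else:
                if self.release_t is None:
                    self.release_t = self.t
                held = self.t - self.release_t
                if held < release:                   # fade out to 0
                    level = self.release_level * (1.0 - held / release)
                else:
                    level = 0.0
                    self.done = 1                    # envelope finished
            self.t += 1.0 / self.rate
            return invalue * level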
Removes the current structure when the ready signal is more than 0.5 (i.e.
when it goes to 1). Useful for ADSR envelopes or PLAY_WAV in instrument
structures.
Will create the specified structure each time it gets a midi event on the
right channel. To protect you from spending too much CPU time, you can
specify a maximum number of structures that may be active at the same time.
If the structure contains an Interface_MIDI_NOTE (it should!), the
MIDI_ROUTER will pass the frequency and other information to the structure
through it.
If a control panel exists for the structure you specified (it has to be
called structurename_GUI), it will be supplied with the x, y and parent
parameters and created even before the first key is pressed. Look at
concepts->instruments for details.
You can use this to debug how your midi events are actually arriving in aRts.
When a MIDI_DEBUG is running, artsserver will print out lines like
201 100753.837585 on 0 42 127
202 101323.128355 off 0 42
The first line tells you that 100753 ms (that is, about 100 seconds)
after the MIDI_DEBUG started, a midi on event arrived on channel 0. This
midi on event had a velocity (volume) of 127, the loudest possible. The
next line shows the corresponding midi release event.
This can be used to bring the input signal into the normalized range
between -1 and 1 that Synth_PLAY can process. The louder the input signal,
the more the signal is distorted by this module. For very small input
signals, the output signal is roughly the same as the input signal
(almost no change).
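One common formula with exactly this behaviour (small signals pass nearly
unchanged, loud signals are squashed into -1..1; this may not be aRts's
exact scaling):

    import math

    def atan_saturate(sample):
        # atan(x) is roughly x for small x, so quiet signals pass through
        # almost untouched; for large x it approaches pi/2, so the output
        # never leaves the range (-1, 1)
        return (2.0 / math.pi) * math.atan((math.pi / 2.0) * sample)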
A frequency, a velocity (volume) and a parameter indicating whether the
key is still pressed will be passed to you. This information is only
filled in if the structure has been created by a Synth_MIDI_ROUTER.
Don't use Synth_PARAM_GET, Synth_PARAM_SET, Synth_PARAM_SGET and
Synth_PARAM_SSET (except for testing purposes or if you have read
the source). They may disappear in future aRts releases.
Synth_AMAN_INJECT is used to inject audio data from audio server clients
into the flow system. Don't use it in structures; it is used internally
(see audioman_impl.cc for details). It may change or disappear without
notice.