WebAudio#

This domain allows inspection of the Web Audio API. https://webaudio.github.io/web-audio-api/

This CDP domain is experimental.

Types#

Generally, you do not need to instantiate CDP types yourself. Instead, the API creates objects for you as return values from commands, and then you can use those objects as arguments to other commands.

class GraphObjectId[source]#

A unique ID for a graph object (AudioContext, AudioNode, AudioParam) in the Web Audio API.

class ContextType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Enum of BaseAudioContext types

REALTIME = 'realtime'#
OFFLINE = 'offline'#
class ContextState(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Enum of AudioContextState from the spec

SUSPENDED = 'suspended'#
RUNNING = 'running'#
CLOSED = 'closed'#
INTERRUPTED = 'interrupted'#
class NodeType[source]#

Enum of AudioNode types

class ChannelCountMode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Enum of AudioNode::ChannelCountMode from the spec

CLAMPED_MAX = 'clamped-max'#
EXPLICIT = 'explicit'#
MAX_ = 'max'#
class ChannelInterpretation(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Enum of AudioNode::ChannelInterpretation from the spec

DISCRETE = 'discrete'#
SPEAKERS = 'speakers'#
class ParamType[source]#

Enum of AudioParam types

class AutomationRate(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Enum of AudioParam::AutomationRate from the spec

A_RATE = 'a-rate'#
K_RATE = 'k-rate'#
class ContextRealtimeData(current_time, render_capacity, callback_interval_mean, callback_interval_variance)[source]#

Fields in AudioContext that change in real-time.

current_time: float#

The current context time, in seconds, of the BaseAudioContext.

render_capacity: float#

The time spent rendering the graph divided by the render quantum duration, multiplied by 100. A value of 100 means the audio renderer has reached full capacity and glitches may occur; see the worked example below.

callback_interval_mean: float#

A running mean of the callback interval.

callback_interval_variance: float#

A running variance of the callback interval.
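
As a worked illustration of the render_capacity formula (the numbers below are hypothetical, chosen only to show the arithmetic):

    # Hypothetical numbers illustrating the render_capacity formula.
    # At a 48 kHz sample rate, one render quantum of 128 frames lasts
    # 128 / 48000 s (about 2.67 ms). Suppose rendering it took 2.0 ms:
    quantum_duration = 128 / 48000          # seconds per render quantum
    time_spent_rendering = 0.002            # seconds spent rendering it
    render_capacity = time_spent_rendering / quantum_duration * 100
    print(round(render_capacity, 1))        # 75.0, i.e. renderer at 75% capacity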

class BaseAudioContext(context_id, context_type, context_state, callback_buffer_size, max_output_channel_count, sample_rate, realtime_data=None)[source]#

Protocol object for BaseAudioContext

context_id: GraphObjectId#
context_type: ContextType#
context_state: ContextState#
callback_buffer_size: float#

Platform-dependent callback buffer size.

max_output_channel_count: float#

Number of output channels supported by audio hardware in use.

sample_rate: float#

Context sample rate.

realtime_data: Optional[ContextRealtimeData] = None#
class AudioListener(listener_id, context_id)[source]#

Protocol object for AudioListener

listener_id: GraphObjectId#
context_id: GraphObjectId#
class AudioNode(node_id, context_id, node_type, number_of_inputs, number_of_outputs, channel_count, channel_count_mode, channel_interpretation)[source]#

Protocol object for AudioNode

node_id: GraphObjectId#
context_id: GraphObjectId#
node_type: NodeType#
number_of_inputs: float#
number_of_outputs: float#
channel_count: float#
channel_count_mode: ChannelCountMode#
channel_interpretation: ChannelInterpretation#
class AudioParam(param_id, node_id, context_id, param_type, rate, default_value, min_value, max_value)[source]#

Protocol object for AudioParam

param_id: GraphObjectId#
node_id: GraphObjectId#
context_id: GraphObjectId#
param_type: ParamType#
rate: AutomationRate#
default_value: float#
min_value: float#
max_value: float#

Commands#

Each command is a generator function. The return type Generator[x, y, z] indicates that the generator yields arguments of type x, it must be resumed with an argument of type y, and it returns type z. In this library, types x and y are the same for all commands, and z is the return type you should pay attention to. For more information, see Getting Started: Commands.
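
For example, enable() can be driven by hand. The sketch below assumes the module is importable as web_audio and uses send_to_browser as a stand-in for whatever transport your CDP client provides; normally a session object (see Getting Started: Commands) performs this step for you.

    from cdp import web_audio  # assumed import path; adjust to your installation

    def run_command(cmd, send_to_browser):
        # A command generator yields the raw CDP request dict; feed the
        # browser's 'result' dict back in, and the generator finishes by
        # raising StopIteration carrying the parsed return value.
        request = next(cmd)                  # dict to send over the wire
        response = send_to_browser(request)  # hypothetical: returns the reply's 'result' dict
        try:
            cmd.send(response)
        except StopIteration as exc:
            return exc.value                 # None for enable()/disable()

    # e.g. run_command(web_audio.enable(), send_to_browser)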

disable()[source]#

Disables the WebAudio domain.

Return type:

Generator[Dict[str, Any], Dict[str, Any], None]

enable()[source]#

Enables the WebAudio domain and starts sending context lifetime events.

Return type:

Generator[Dict[str, Any], Dict[str, Any], None]

get_realtime_data(context_id)[source]#

Fetch the realtime data from the given registered context.

Parameters:

context_id (GraphObjectId) – The context to fetch realtime data for.

Return type:

Generator[Dict[str, Any], Dict[str, Any], ContextRealtimeData]

Returns:

The realtime data for the given context.
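
A minimal usage sketch, assuming a trio-cdp style session whose execute() coroutine runs a command generator; the session object and the origin of context_id are assumptions, not part of this module.

    from cdp import web_audio  # assumed import path; adjust to your installation

    async def report_capacity(session, context_id):
        # context_id would typically come from a ContextCreated event
        # received after calling enable().
        data = await session.execute(web_audio.get_realtime_data(context_id))
        print(f"t={data.current_time:.3f}s "
              f"capacity={data.render_capacity:.1f} "
              f"callback mean={data.callback_interval_mean:.4f}s")
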
Events#

Generally, you do not need to instantiate CDP events yourself. Instead, the API creates events for you and then you use the event’s attributes.
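
A minimal sketch of consuming these events, again assuming a trio-cdp style session whose listen() yields typed event objects (an assumption about your client, not part of this module):

    from cdp import web_audio  # assumed import path; adjust to your installation

    async def watch_contexts(session):
        await session.execute(web_audio.enable())
        # Assumed: session.listen() yields events of the requested type.
        async for event in session.listen(web_audio.ContextCreated):
            ctx = event.context
            print("new context:", ctx.context_id, ctx.context_type.value,
                  ctx.context_state.value, f"{ctx.sample_rate:.0f} Hz")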

class ContextCreated(context)[source]#

Notifies that a new BaseAudioContext has been created.

context: BaseAudioContext#
class ContextWillBeDestroyed(context_id)[source]#

Notifies that an existing BaseAudioContext will be destroyed.

context_id: GraphObjectId#
class ContextChanged(context)[source]#

Notifies that an existing BaseAudioContext has changed some properties (its ID stays the same).

context: BaseAudioContext#
class AudioListenerCreated(listener)[source]#

Notifies that the construction of an AudioListener has finished.

listener: AudioListener#
class AudioListenerWillBeDestroyed(context_id, listener_id)[source]#

Notifies that an existing AudioListener has been destroyed.

context_id: GraphObjectId#
listener_id: GraphObjectId#
class AudioNodeCreated(node)[source]#

Notifies that a new AudioNode has been created.

node: AudioNode#
class AudioNodeWillBeDestroyed(context_id, node_id)[source]#

Notifies that an existing AudioNode has been destroyed.

context_id: GraphObjectId#
node_id: GraphObjectId#
class AudioParamCreated(param)[source]#

Notifies that a new AudioParam has been created.

param: AudioParam#
class AudioParamWillBeDestroyed(context_id, node_id, param_id)[source]#

Notifies that an existing AudioParam has been destroyed.

context_id: GraphObjectId#
node_id: GraphObjectId#
param_id: GraphObjectId#
class NodesConnected(context_id, source_id, destination_id, source_output_index, destination_input_index)[source]#

Notifies that two AudioNodes are connected.

context_id: GraphObjectId#
source_id: GraphObjectId#
destination_id: GraphObjectId#
source_output_index: Optional[float]#
destination_input_index: Optional[float]#
class NodesDisconnected(context_id, source_id, destination_id, source_output_index, destination_input_index)[source]#

Notifies that AudioNodes are disconnected. The destination can be null; in that case all outgoing connections from the source are disconnected (see the sketch below).

context_id: GraphObjectId#
source_id: GraphObjectId#
destination_id: GraphObjectId#
source_output_index: Optional[float]#
destination_input_index: Optional[float]#
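
A small sketch of how NodesConnected and NodesDisconnected can be used together to maintain a picture of the audio graph; the event-dispatch plumbing is assumed, and only the event fields listed above come from this module.

    from cdp import web_audio  # assumed import path; adjust to your installation

    # Edges keyed by (source_id, destination_id, output_index, input_index).
    edges = set()

    def on_nodes_connected(ev: web_audio.NodesConnected):
        edges.add((ev.source_id, ev.destination_id,
                   ev.source_output_index, ev.destination_input_index))

    def on_nodes_disconnected(ev: web_audio.NodesDisconnected):
        if ev.destination_id is None:
            # Per the description above, a null destination means every
            # outgoing connection from the source was disconnected.
            edges.difference_update({e for e in edges if e[0] == ev.source_id})
        else:
            edges.discard((ev.source_id, ev.destination_id,
                           ev.source_output_index, ev.destination_input_index))
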
class NodeParamConnected(context_id, source_id, destination_id, source_output_index)[source]#

Notifies that an AudioNode is connected to an AudioParam.

context_id: GraphObjectId#
source_id: GraphObjectId#
destination_id: GraphObjectId#
source_output_index: Optional[float]#
class NodeParamDisconnected(context_id, source_id, destination_id, source_output_index)[source]#

Notifies that an AudioNode is disconnected from an AudioParam.

context_id: GraphObjectId#
source_id: GraphObjectId#
destination_id: GraphObjectId#
source_output_index: Optional[float]#