Audio API changes #797

Open
wants to merge 42 commits into base: main
Changes from 37 commits
Commits (42 total)
4f524b8
backup
yilinwei Jul 22, 2023
8b1d132
backup
yilinwei Jul 22, 2023
b44eaa8
backup
yilinwei Jul 22, 2023
2aaf2ed
Switch back to using traits for now.
yilinwei Sep 18, 2023
b91df37
typo.
yilinwei Sep 18, 2023
a4edff5
Switch encoding for mima.
yilinwei Sep 24, 2023
9887ce0
Check-in API report
yilinwei Sep 24, 2023
081534d
BlobEvent and MediaRecorder.
zainab-ali Oct 8, 2023
fca6713
Make sure `BlobEvent` is class.
yilinwei Oct 8, 2023
4dda4bf
`data` is required.
yilinwei Oct 8, 2023
a4cfb9a
Add `AudioWorkletNode` and associated options.
yilinwei Nov 15, 2023
0099ad3
Add `Worklet` and `AudioWorklet`.
yilinwei Nov 15, 2023
e8b3650
Fix signature
yilinwei Nov 15, 2023
1178935
Add `AudioParamDescriptor`.
yilinwei Nov 15, 2023
fdb9aad
Add `defaultValue` for `AudioParamDescriptor`.
yilinwei Nov 15, 2023
c067de2
Make sure to extend `js.Object`.
yilinwei Nov 15, 2023
ba8f619
Add `AudioWorkletGlobalScope`.
yilinwei Nov 15, 2023
3e32f25
`AudioWorkletNode` should not be abstract.
yilinwei Nov 16, 2023
42275a7
Make `ReadOnlyMapLike` extend `js.Iterable`.
yilinwei Nov 16, 2023
0e90800
`self` does not yet exist within the `Worklet` contexts.
yilinwei Nov 16, 2023
f860eaa
Correct `ReadOnlyMapLike` signature `forEach`.
yilinwei Nov 16, 2023
b548118
Add docs.
zainab-ali Dec 2, 2023
2d1f240
Add docs.
zainab-ali Dec 2, 2023
f7adab3
Doc improvements.
zainab-ali Dec 18, 2023
56d513b
Add js.native annotation to AudioParamAutomationRate.
zainab-ali Dec 18, 2023
6781565
More docs.
zainab-ali Dec 18, 2023
7d6eb4e
Add js.native annotation to AudioTimestamp.
zainab-ali Dec 18, 2023
d159170
Correct type of params for AudioWorkletProcessor.
zainab-ali Dec 18, 2023
3bac38d
WorkletOptions should extend js.Object.
zainab-ali Dec 18, 2023
e32a80c
Add MediaRecorder and options.
zainab-ali Dec 18, 2023
c221e2b
Correct scaladoc.
zainab-ali Dec 18, 2023
824092d
Api reports.
zainab-ali Dec 18, 2023
e637830
AudioWorkletGlobalScope should be an abstract class.
zainab-ali Dec 29, 2023
314c67b
AudioScheduledSourceNode should be an abstract class.
zainab-ali Dec 29, 2023
9923b6b
MediaElementAudioSourceNode mediaElement should be a def.
zainab-ali Dec 29, 2023
98af177
Regenerate api reports.
zainab-ali Dec 29, 2023
18a6f7d
Add docs for ReadOnlyMapLike.
zainab-ali Dec 29, 2023
df8e9cf
Reformat doc comments.
zainab-ali Jan 28, 2024
523266a
Remove redundant comment.
zainab-ali Jan 28, 2024
07dcf43
Remove channelCount, channelCountMode and channelInterpretation.
zainab-ali Jan 28, 2024
b3a694e
Refactor enums for Scala 3.
zainab-ali Jan 28, 2024
e305129
Regenerate API reports.
zainab-ali Jan 28, 2024
326 changes: 274 additions & 52 deletions api-reports/2_12.txt

Large diffs are not rendered by default.

326 changes: 274 additions & 52 deletions api-reports/2_13.txt

Large diffs are not rendered by default.

26 changes: 15 additions & 11 deletions dom/src/main/scala/org/scalajs/dom/AudioBufferSourceNode.scala
@@ -24,7 +24,7 @@ import scala.scalajs.js
* - Channel count: defined by the associated AudioBuffer
*/
@js.native
trait AudioBufferSourceNode extends AudioNode {
trait AudioBufferSourceNode extends AudioScheduledSourceNode {

/** Is an AudioBuffer that defines the audio asset to be played, or when set to the value null, defines a single
* channel of silence.
@@ -63,16 +63,20 @@ trait AudioBufferSourceNode extends AudioNode {
* The duration parameter, which defaults to the length of the asset minus the value of offset, defines the length
* of the portion of the asset to be played.
*/
def start(when: Double = js.native, offset: Double = js.native, duration: Double = js.native): Unit = js.native
def start(when: Double, offset: Double, duration: Double): Unit = js.native

/** Schedules the end of the playback of an audio asset.
*
* @param when
* The when parameter defines when the playback will stop. If it represents a time in the past, the playback will
* end immediately. If this method is called twice or more, an exception is raised.
*/
def stop(when: Double = js.native): Unit = js.native

/** Is an EventHandler containing the callback associated with the ended event. */
var onended: js.Function1[Event, _] = js.native

def start(when: Double, offset: Double): Unit = js.native

}

object AudioBufferSourceNode {

import js.`|`.undefOr2jsAny

def apply(context: BaseAudioContext,
options: js.UndefOr[AudioBufferSourceNodeOptions] = js.undefined): AudioBufferSourceNode = {
js.Dynamic
.newInstance(js.Dynamic.global.AudioBufferSourceNode)(context, options)
.asInstanceOf[AudioBufferSourceNode]
}
}
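As a usage note (not part of the diff): with the companion-object constructor above, playing a decoded buffer might look like the following sketch, assuming a running AudioContext and an already-decoded AudioBuffer are in scope.

import org.scalajs.dom._

// Sketch only: play a decoded buffer once through the default output.
def playOnce(ctx: AudioContext, decoded: AudioBuffer): Unit = {
  val source = AudioBufferSourceNode(ctx, new AudioBufferSourceNodeOptions {
    buffer = decoded
  })
  source.connect(ctx.destination)
  // when = 0 (now), offset = 0, duration = whole buffer; all in seconds.
  source.start(0, 0, decoded.duration)
}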
65 changes: 65 additions & 0 deletions dom/src/main/scala/org/scalajs/dom/AudioBufferSourceNodeOptions.scala
@@ -0,0 +1,65 @@
/** Documentation is thanks to Mozilla Contributors at https://developer.mozilla.org/en-US/docs/Web/API and available
* under the Creative Commons Attribution-ShareAlike v2.5 or later. http://creativecommons.org/licenses/by-sa/2.5/
*
* Everything else is under the MIT License http://opensource.org/licenses/MIT
*/
package org.scalajs.dom

import scala.scalajs.js

trait AudioBufferSourceNodeOptions extends js.Object {

/** An instance of [[AudioBuffer]] to be played. */
var buffer: js.UndefOr[AudioBuffer] = js.undefined

/** Indicates whether the audio should play in a loop. The default is false. If the loop is dynamically modified
* during playback, the new value will take effect on the next processing block of audio.
*/
var loop: js.UndefOr[Boolean] = js.undefined

/** An optional value in seconds, where looping should begin if the loop attribute is true. The default is 0. It's
* sensible to set this to a value between 0 and the duration of the buffer. If loopStart is less than 0, looping
* will begin at 0. If loopStart is greater than the duration of the buffer, looping will begin at the end of the
* buffer. This attribute is converted to an exact sample frame offset within the buffer, by multiplying by the
* buffer's sample rate and rounding to the nearest integer value. Thus, its behavior is independent of the value of
* the playbackRate parameter.
*/
var loopStart: js.UndefOr[Double] = js.undefined

/** An optional value, in seconds, where looping should end if the loop attribute is true. The default is 0. Its value
* is exclusive to the content of the loop. The sample frames, comprising the loop, run from the values loopStart to
* loopEnd-(1/sampleRate). It's sensible to set this to a value between 0 and the duration of the buffer. If loopEnd
* is less than 0, looping will end at 0. If loopEnd is greater than the duration of the buffer, looping will end at
* the end of the buffer. This attribute is converted to an exact sample frame offset within the buffer, by
* multiplying by the buffer's sample rate and rounding to the nearest integer value. Thus, its behavior is
* independent of the value of the playbackRate parameter.
*/
var loopEnd: js.UndefOr[Double] = js.undefined

/** A value in cents to modulate the speed of audio stream rendering. Its nominal range is (-∞ to +∞). The default is
* 0.
*/
var detune: js.UndefOr[Double] = js.undefined

/** The speed at which to render the audio stream. Its default value is 1. This parameter is k-rate. This is a
* compound parameter with detune. Its nominal range is (-∞ to +∞).
*/
var playbackRate: js.UndefOr[Double] = js.undefined

/** Represents an integer used to determine how many channels are used when up-mixing and down-mixing connections to
* any inputs to the node. (See AudioNode.channelCount for more information.) Its usage and precise definition depend
* on the value of channelCountMode.
*/
var channelCount: js.UndefOr[Int] = js.undefined

/** Represents an enumerated value describing the way channels must be matched between the node's inputs and outputs.
* (See AudioNode.channelCountMode for more information including default values.)
*/
var channelCountMode: js.UndefOr[AudioNodeChannelCountMode] = js.undefined

/** Represents an enumerated value describing the meaning of the channels. This interpretation will define how audio
* up-mixing and down-mixing will happen. The possible values are "speakers" or "discrete". (See
* AudioNode.channelInterpretation for more information including default values.)
*/
var channelInterpretation: js.UndefOr[AudioNodeChannelInterpretation] = js.undefined
}
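For illustration only (the numeric values below are arbitrary), a looping configuration built from these fields might look like this sketch:

import org.scalajs.dom._

object LoopExample {
  // Sketch: loop the slice between 1.0s and 3.0s of the buffer at half speed.
  val loopingOpts = new AudioBufferSourceNodeOptions {
    loop = true
    loopStart = 1.0
    loopEnd = 3.0
    playbackRate = 0.5
  }
}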
138 changes: 13 additions & 125 deletions dom/src/main/scala/org/scalajs/dom/AudioContext.scala
@@ -17,98 +17,15 @@ import scala.scalajs.js.annotation._
*/
@js.native
@JSGlobal
class AudioContext extends EventTarget {
class AudioContext extends BaseAudioContext {

/** Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0 and
* cannot be stopped, paused or reset.
*/
def currentTime: Double = js.native

/** Returns the number of seconds of processing latency incurred by the AudioContext passing the audio from the
* AudioDestinationNode to the audio subsystem.
*/
def baseLatency: Double = js.native

/** Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought
* of as the audio-rendering device.
*/
val destination: AudioDestinationNode = js.native

/** Returns the AudioListener object, used for 3D spatialization. */
val listener: AudioListener = js.native

/** Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The
* sample-rate of an AudioContext cannot be changed.
*/
val sampleRate: Double = js.native

/** Returns the current state of the AudioContext. */
def state: String = js.native

/** Closes the audio context, releasing any system audio resources that it uses. */
def close(): js.Promise[Unit] = js.native

/** Creates an AnalyserNode, which can be used to expose audio time and frequency data and for example to create data
* visualisations.
*/
def createAnalyser(): AnalyserNode = js.native

/** Creates a BiquadFilterNode, which represents a second order filter configurable as several different common filter
* types: high-pass, low-pass, band-pass, etc.
*/
def createBiquadFilter(): BiquadFilterNode = js.native

/** Creates a new, empty AudioBuffer object, which can then be populated by data and played via an
* AudioBufferSourceNode.
*
* @param numOfChannels
* An integer representing the number of channels this buffer should have. Implementations must support a minimum
* of 32 channels.
* @param length
* An integer representing the size of the buffer in sample-frames.
* @param sampleRate
* The sample-rate of the linear audio data in sample-frames per second. An implementation must support
* sample-rates in at least the range 22050 to 96000.
*/
def createBuffer(numOfChannels: Int, length: Int, sampleRate: Int): AudioBuffer = js.native

/** Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an
* AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by
* AudioContext.decodeAudioData when it successfully decodes an audio track.
*/
def createBufferSource(): AudioBufferSourceNode = js.native

/** Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio
* stream.
*
* @param numberOfInputs
* The number of channels in the input audio streams, which the output stream will contain; the default is 6 if
* this parameter is not specified.
*/
def createChannelMerger(numberOfInputs: Int = js.native): ChannelMergerNode = js.native

/** Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them
* separately.
*
* @param numberOfOutputs
* The number of channels in the input audio stream that you want to output separately; the default is 6 if this
* parameter is not specified.
*/
def createChannelSplitter(numberOfOutputs: Int = js.native): ChannelSplitterNode = js.native

/** Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a
* reverberation effect.
*/
def createConvolver(): ConvolverNode = js.native

/** Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also
* useful to create feedback loops in a Web Audio API graph.
*
* @param maxDelayTime
* The maximum amount of time, in seconds, that the audio signal can be delayed by. The default value is 0.
*/
def createDelay(maxDelayTime: Int): DelayNode = js.native

/** Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal. */
def createDynamicsCompressor(): DynamicsCompressorNode = js.native

/** Creates a GainNode, which can be used to control the overall volume of the audio graph. */
def createGain(): GainNode = js.native
/** Returns an estimation of the output latency of the current audio context. */
def outputLatency: Double = js.native

/** Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate
* audio from <video> or <audio> elements.
@@ -131,47 +48,18 @@ class AudioContext extends EventTarget {
*/
def createMediaStreamDestination(): MediaStreamAudioDestinationNode = js.native

/** Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone. */
def createOscillator(): OscillatorNode = js.native

/** Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space. */
def createPanner(): PannerNode = js.native

/** Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an
* OscillatorNode.
*/
def createPeriodicWave(real: js.typedarray.Float32Array, imag: js.typedarray.Float32Array): PeriodicWave = js.native

/** Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source. */
def createStereoPanner(): StereoPannerNode = js.native

/** Creates a WaveShaperNode, which is used to implement non-linear distortion effects. */
def createWaveShaper(): WaveShaperNode = js.native

/** Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually
* loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only
* works on complete files, not fragments of audio files.
*
* @param audioData
* An ArrayBuffer containing the audio data to be decoded, usually grabbed from an XMLHttpRequest's response
* attribute after setting the responseType to arraybuffer.
* @param successCallback
* A callback function to be invoked when the decoding successfully finishes. The single argument to this callback
* is an AudioBuffer representing the decoded PCM audio data. Usually you'll want to put the decoded data into an
* AudioBufferSourceNode, from which it can be played and manipulated how you want.
* @param errorCallback
* An optional error callback, to be invoked if an error occurs when the audio data is being decoded.
*/
def decodeAudioData(
audioData: js.typedarray.ArrayBuffer, successCallback: js.Function1[AudioBuffer, _] = js.native,
errorCallback: js.Function0[_] = js.native
): js.Promise[AudioBuffer] = js.native

/** Resumes the progression of time in an audio context that has previously been suspended. */
def resume(): js.Promise[Unit] = js.native

/** Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing
* CPU/battery usage in the process.
*/
def suspend(): js.Promise[Unit] = js.native

/** Closes the audio context, releasing any system audio resources that it uses. */
def close(): js.Promise[Unit] = js.native

/** Returns a new AudioTimestamp object containing two audio timestamp values relating to the current audio context.
*/
def getOutputTimestamp: AudioTimestamp = js.native
}
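As an aside, the two new latency accessors and getOutputTimestamp might be read together as in this sketch (contextTime and performanceTime are the AudioTimestamp fields defined by the Web Audio spec; they are not shown in this diff):

import org.scalajs.dom._

// Sketch: estimate total scheduling-to-speaker delay and log the two clocks.
def logLatency(ctx: AudioContext): Unit = {
  val total = ctx.baseLatency + ctx.outputLatency
  val ts = ctx.getOutputTimestamp
  println(s"total latency: ${total}s, contextTime: ${ts.contextTime}, performanceTime: ${ts.performanceTime}")
}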
6 changes: 2 additions & 4 deletions dom/src/main/scala/org/scalajs/dom/AudioNode.scala
@@ -47,14 +47,12 @@ trait AudioNode extends EventTarget {

/** Represents an enumerated value describing the way channels must be matched between the node's inputs and outputs.
*/
var channelCountMode: Int = js.native
var channelCountMode: AudioNodeChannelCountMode = js.native

/** Represents an enumerated value describing the meaning of the channels. This interpretation will define how audio
* up-mixing and down-mixing will happen.
*
* The possible values are "speakers" or "discrete".
*/
var channelInterpretation: String = js.native
var channelInterpretation: AudioNodeChannelInterpretation = js.native

/** Allows us to connect one output of this node to one input of another node. */
def connect(audioNode: AudioNode): Unit = js.native
30 changes: 30 additions & 0 deletions dom/src/main/scala/org/scalajs/dom/AudioNodeChannelCountMode.scala
@@ -0,0 +1,30 @@
/** Documentation is thanks to Mozilla Contributors at https://developer.mozilla.org/en-US/docs/Web/API and available
* under the Creative Commons Attribution-ShareAlike v2.5 or later. http://creativecommons.org/licenses/by-sa/2.5/
*
* Everything else is under the MIT License http://opensource.org/licenses/MIT
*/
package org.scalajs.dom

import scala.scalajs.js

/** Represents an enumerated value describing the way channels must be matched between the AudioNode's inputs and
* outputs.
*/
@js.native
sealed trait AudioNodeChannelCountMode extends js.Any

object AudioNodeChannelCountMode {

/** The number of channels is equal to the maximum number of channels of all connections. In this case, channelCount
* is ignored and only up-mixing happens.
*/
val max: AudioNodeChannelCountMode = "max".asInstanceOf[AudioNodeChannelCountMode]

/** The number of channels is equal to the maximum number of channels of all connections, clamped to the value of
* channelCount.
*/
val `clamped-max`: AudioNodeChannelCountMode = "clamped-max".asInstanceOf[AudioNodeChannelCountMode]

/** The number of channels is defined by the value of channelCount. */
val explicit: AudioNodeChannelCountMode = "explicit".asInstanceOf[AudioNodeChannelCountMode]
}
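A note on the encoding used here: each constant is a plain JavaScript string cast to the opaque trait, so values compare and serialize exactly like the underlying strings. A small sketch (not part of the diff) of what that means in practice:

import org.scalajs.dom._

object ChannelCountModeExample {
  // Sketch: the constants are JS strings under the hood.
  val mode: AudioNodeChannelCountMode = AudioNodeChannelCountMode.`clamped-max`
  assert(mode == "clamped-max".asInstanceOf[AudioNodeChannelCountMode])
}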
28 changes: 28 additions & 0 deletions dom/src/main/scala/org/scalajs/dom/AudioNodeChannelInterpretation.scala
@@ -0,0 +1,28 @@
/** Documentation is thanks to Mozilla Contributors at https://developer.mozilla.org/en-US/docs/Web/API and available
* under the Creative Commons Attribution-ShareAlike v2.5 or later. http://creativecommons.org/licenses/by-sa/2.5/
*
* Everything else is under the MIT License http://opensource.org/licenses/MIT
*/
package org.scalajs.dom

import scala.scalajs.js

/** Represents an enumerated value describing how input channels are mapped to output channels when the number of
* inputs/outputs is different. For example, this setting defines how a mono input will be up-mixed to a stereo or 5.1
* channel output, or how a quad channel input will be down-mixed to a stereo or mono output.
*/
@js.native
sealed trait AudioNodeChannelInterpretation extends js.Any

object AudioNodeChannelInterpretation {

/** Use a set of "standard" mappings for combinations of common speaker input and output setups (mono, stereo, quad,
* 5.1). For example, with this setting a mono channel input will output to both channels of a stereo output.
*/
val speakers: AudioNodeChannelInterpretation = "speakers".asInstanceOf[AudioNodeChannelInterpretation]

/** Input channels are mapped to output channels in order. If there are more inputs than outputs, the additional
* inputs are dropped; if there are fewer, the unused outputs are silent.
*/
val discrete: AudioNodeChannelInterpretation = "discrete".asInstanceOf[AudioNodeChannelInterpretation]
}
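Combined with the retyped fields on AudioNode in this PR, mixing behaviour can now be configured without raw strings. A sketch (assuming channelCount is the pre-existing Int field on AudioNode):

import org.scalajs.dom._

// Sketch: configure a gain node with explicit, discrete channel handling.
def configure(ctx: AudioContext): GainNode = {
  val gain = ctx.createGain()
  gain.channelCount = 2
  gain.channelCountMode = AudioNodeChannelCountMode.explicit
  gain.channelInterpretation = AudioNodeChannelInterpretation.discrete
  gain
}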
6 changes: 6 additions & 0 deletions dom/src/main/scala/org/scalajs/dom/AudioParam.scala
@@ -36,6 +36,12 @@ trait AudioParam extends AudioNode {
/** Represents the initial value of the attribute as defined by the specific AudioNode creating the AudioParam. */
val defaultValue: Double = js.native

/** Represents the maximum possible value for the parameter's nominal (effective) range. */
val maxValue: Double = js.native

/** Represents the minimum possible value for the parameter's nominal (effective) range. */
val minValue: Double = js.native

/** Schedules an instant change to the value of the AudioParam at a precise time, as measured against
* AudioContext.currentTime. The new value is given in the value parameter.
*
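One way the new minValue/maxValue accessors might be used (a sketch; setValueAtTime is the existing facade method documented above):

import org.scalajs.dom._

// Sketch: clamp a requested value into the parameter's nominal range
// before scheduling it.
def setClamped(param: AudioParam, requested: Double, startTime: Double): Unit = {
  val clamped = math.max(param.minValue, math.min(param.maxValue, requested))
  param.setValueAtTime(clamped, startTime)
}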
22 changes: 22 additions & 0 deletions dom/src/main/scala/org/scalajs/dom/AudioParamAutomationRate.scala
@@ -0,0 +1,22 @@
/** Documentation is thanks to Mozilla Contributors at https://developer.mozilla.org/en-US/docs/Web/API and available
* under the Creative Commons Attribution-ShareAlike v2.5 or later. http://creativecommons.org/licenses/by-sa/2.5/
*
* Everything else is under the MIT License http://opensource.org/licenses/MIT
*/
package org.scalajs.dom

import scala.scalajs.js

@js.native
sealed trait AudioParamAutomationRate extends js.Any

object AudioParamAutomationRate {

/** An a-rate [[AudioParam]] takes the current audio parameter value for each sample frame of the audio signal. */
val `a-rate`: AudioParamAutomationRate = "a-rate".asInstanceOf[AudioParamAutomationRate]

/** A k-rate [[AudioParam]] uses the same initial audio parameter value for the whole block processed; that is, 128
* sample frames. In other words, the same value applies to every frame in the audio as it's processed by the node.
*/
val `k-rate`: AudioParamAutomationRate = "k-rate".asInstanceOf[AudioParamAutomationRate]
}
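The type above corresponds to the spec's AudioParam.automationRate attribute. This diff defines only the value type, so the sketch below goes through js.Dynamic rather than assuming a facade field exists:

import scala.scalajs.js
import org.scalajs.dom._

// Sketch: set the spec-defined `automationRate` attribute dynamically,
// since a typed facade field is not part of this diff.
def makeKRate(param: AudioParam): Unit =
  param.asInstanceOf[js.Dynamic].automationRate = AudioParamAutomationRate.`k-rate`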