Code Samples

Use the code samples below to integrate your applications with the API.

Node

Stream a local file

Attention: This example uses the Rev AI Node SDK.
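
If you have not installed the SDK yet, it is available from npm:

npm install revai-node-sdk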

The following example can be used to configure a streaming client, stream audio from a file, and obtain the transcript as the audio is processed.

To use this example, replace the <FILEPATH> placeholder with the path to the file you wish to transcribe and the <REVAI_ACCESS_TOKEN> placeholder with your Rev AI account's access token.

const revai = require('revai-node-sdk');
const fs = require('fs');

const token = '<REVAI_ACCESS_TOKEN>';
const filePath = '<FILEPATH>';

// Initialize your client with your audio configuration and access token
const audioConfig = new revai.AudioConfig(
    /* contentType */ "audio/x-raw",
    /* layout */      "interleaved",
    /* sample rate */ 16000,
    /* format */      "S16LE",
    /* channels */    1
);

const client = new revai.RevAiStreamingClient(token, audioConfig);

// Create your event responses
client.on('close', (code, reason) => {
    console.log(`Connection closed, ${code}: ${reason}`);
});
client.on('httpResponse', code => {
    console.log(`Streaming client received http response with code: ${code}`);
});
client.on('connectFailed', error => {
    console.log(`Connection failed with error: ${error}`);
});
client.on('connect', connectionMessage => {
    console.log(`Connected with message: ${connectionMessage}`);
});

// Begin streaming session
const stream = client.start();

// Read file from disk
const file = fs.createReadStream(filePath);

stream.on('data', data => {
    console.log(data);
});
stream.on('end', function () {
    console.log("End of Stream");
});

file.on('end', () => {
    client.end();
});

// Stream the file
file.pipe(stream);

// Forcibly ends the streaming session
// stream.end();
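
For reference, each object delivered to the stream's data handler is a hypothesis message. Its general shape is sketched below (abridged, with illustrative values; see the streaming API reference for the full schema):

{
  "type": "final",
  "ts": 0.0,
  "end_ts": 2.4,
  "elements": [
    { "type": "text", "value": "Hello", "ts": 0.5, "end_ts": 0.9, "confidence": 0.99 },
    { "type": "punct", "value": "." }
  ]
}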

Stream and transcribe microphone audio

Attention: This example uses the Rev AI Node SDK and the mic package.
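
Both packages are available from npm:

npm install revai-node-sdk mic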

The following example can be used to configure your streaming client, send audio as a stream from your microphone input, and obtain the transcript as it is processed.

To use this example, replace the <REVAI_ACCESS_TOKEN> placeholder with your Rev AI access token.

const revai = require('revai-node-sdk');
const mic = require('mic');

const token = '<REVAI_ACCESS_TOKEN>';

// initialize client with audio configuration and access token
const audioConfig = new revai.AudioConfig(
    /* contentType */ "audio/x-raw",
    /* layout */      "interleaved",
    /* sample rate */ 16000,
    /* format */      "S16LE",
    /* channels */    1
);

// initialize microphone configuration
// note: microphone device id differs
// from system to system and can be obtained with
// arecord --list-devices and arecord --list-pcms
const micConfig = {
    /* sample rate */ rate: 16000,
    /* channels */    channels: 1,
    /* device id */   device: 'hw:0,0'
};

const client = new revai.RevAiStreamingClient(token, audioConfig);

const micInstance = mic(micConfig);

// create microphone stream
const micStream = micInstance.getAudioStream();

// create event responses
client.on('close', (code, reason) => {
    console.log(`Connection closed, ${code}: ${reason}`);
});
client.on('httpResponse', code => {
    console.log(`Streaming client received http response with code: ${code}`);
});
client.on('connectFailed', error => {
    console.log(`Connection failed with error: ${error}`);
});
client.on('connect', connectionMessage => {
    console.log(`Connected with message: ${connectionMessage}`);
});
micStream.on('error', error => {
    console.log(`Microphone input stream error: ${error}`);
});

// begin streaming session
const stream = client.start();

// create event responses
stream.on('data', data => {
    console.log(data);
});
stream.on('end', function () {
    console.log("End of Stream");
});

// pipe the microphone audio to Rev AI client
micStream.pipe(stream);

// start the microphone
micInstance.start();

// Forcibly ends the streaming session
// stream.end();
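
Note that the mic package is a thin wrapper around a command-line recorder: it typically invokes arecord on Linux and sox on macOS and Windows, so the corresponding tool must be installed and on your PATH.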

Recover from connection errors and timeouts during a stream

Attention: This example uses the Rev AI Node SDK.

The following example can be used to configure a streaming client to transcribe a long-duration stream from a RAW-format audio file. It handles reconnects (whether due to session-length timeouts or other connectivity interruptions) without losing audio, and re-aligns timestamp offsets to the new streaming session when reconnecting.
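
For example, with 16 kHz, 16-bit (2-byte), mono audio, a final hypothesis ending at 12.5 seconds means 12.5 × 16000 × 2 = 400,000 bytes, or 50 chunks of 8,000 bytes, have been fully transcribed. On reconnect, the example resends the backed-up audio from chunk 50 onward and sets start_ts to 12.5 so the new session's timestamps line up with the old one's.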

To use this example, replace the <FILEPATH> placeholder with the path to the audio file (RAW format) you wish to stream and the <REVAI_ACCESS_TOKEN> placeholder with your Rev AI account's access token.
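
The examples in this section expect raw 16 kHz, 16-bit little-endian, mono PCM. If your source audio is in another format, a tool such as FFmpeg can convert it, for example:

ffmpeg -i input.wav -f s16le -ar 16000 -ac 1 output.raw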

const fs = require('fs');
const revai = require('revai-node-sdk');
const { Writable } = require('stream');

const token = '<REVAI_ACCESS_TOKEN>';
const filePath = '<FILEPATH>';
const bytesPerSample = 2;
const samplesPerSecond = 16000;
const chunkSize = 8000;

// initialize client with audio configuration and access token
const audioConfig = new revai.AudioConfig(
    /* contentType */ 'audio/x-raw',
    /* layout */      'interleaved',
    /* sample rate */ samplesPerSecond,
    /* format */      'S16LE',
    /* channels */    1
);

// optional session configuration
const sessionConfig = new revai.SessionConfig(
    'example metadata', /* (optional) metadata */
    null,               /* (optional) custom_vocabulary_id */
    false,              /* (optional) filter_profanity */
    false,              /* (optional) remove_disfluencies */
    0,                  /* (optional) delete_after_seconds */
    0,                  /* (optional) start_ts */
    'machine',          /* (optional) transcriber */
    false,              /* (optional) detailed_partials */
    'en'                /* (optional) language */
);

// streaming client and session state
let client = null;
let revaiStream = null;

let audioBackup = [];
let audioBackupCopy = [];
let newStream = true;
let lastResultEndTsReceived = 0.0;

function handleData(data) {
    switch (data.type){
        case 'connected':
            console.log("Received connected");
            break;
        case 'partial':
            console.log(`Partial: ${data.elements.map(x => x.value).join(' ')}`);
            break;
        case 'final':
            console.log(`Final: ${data.elements.map(x => x.value).join('')}`);
            const textElements = data.elements.filter(x => x.type === "text");
            lastResultEndTsReceived = textElements[textElements.length - 1].end_ts;
            // log roughly how many KiB of audio have been finalized so far
            console.log(`Finalized audio: ${lastResultEndTsReceived * samplesPerSecond * bytesPerSample / 1024} KiB`);
            break;
        default:
            // all messages from the API are expected to be one of the previous types
            console.error('Received unexpected message');
            break;
    }
}

function startStream() {
    client = new revai.RevAiStreamingClient(token, audioConfig);

    // create event responses
    client.on('close', (code, reason) => {
        console.log(`Connection closed, ${code}: ${reason}`);
        if (code !== 1000 || reason === 'Reached max session lifetime') {
            console.log('Restarting stream');
            restartStream();
        }
        console.log(`Bytes written: ${bytesWritten}`);
    });
    client.on('httpResponse', code => {
        console.log(`Streaming client received HTTP response with code: ${code}`);
    });
    client.on('connectFailed', error => {
        console.log(`Connection failed with error: ${error}`);
    });
    client.on('connect', connectionMessage => {
        console.log(`Connected with job ID: ${connectionMessage.id}`);
    });

    audioBackup = [];
    sessionConfig.startTs = lastResultEndTsReceived;

    revaiStream = client.start(sessionConfig);
    // register handleData directly so restartStream() can remove it later
    revaiStream.on('data', handleData);
    revaiStream.on('end', function () {
        console.log('End of stream');
    });
}

let bytesWritten = 0;

const audioInputStreamTransform = new Writable({
    write(chunk, encoding, next) {
        if (newStream && audioBackupCopy.length !== 0) {
            // approximate how much of the backed-up audio was already finalized
            const bytesSent = lastResultEndTsReceived * samplesPerSecond * bytesPerSample;
            const chunksSent = Math.floor(bytesSent / chunkSize);
            if (chunksSent !== 0) {
                // resend only the backed-up audio the previous session did not finalize
                for (let i = chunksSent; i < audioBackupCopy.length; i++) {
                    revaiStream.write(audioBackupCopy[i][0], audioBackupCopy[i][1]);
                }
            }
            newStream = false;
        }

        audioBackup.push([chunk, encoding]);

        if (revaiStream) {
            revaiStream.write(chunk, encoding);
            bytesWritten += chunk.length;
        }

        next();
    },

    final() {
        if (client && revaiStream) {
            client.end();
            revaiStream.end();
        }
    }
});

function restartStream() {
    if (revaiStream) {
        client.end();
        revaiStream.end();
        revaiStream.removeListener('data', handleData);
        revaiStream = null;
    }

    audioBackupCopy = audioBackup;

    newStream = true;

    startStream();
}

// read file from disk
let file = fs.createReadStream(filePath);

startStream();

file.on('end', () => {
    chunkInputTransform.end();
});

// holds data left over after splitting writes into 8000-byte chunks
let leftOverData = null;

const chunkInputTransform = new Writable({
    write(chunk, encoding, next) {
        if (encoding !== 'buffer') {
            console.log(`${encoding} is not buffer, writing directly`);
            audioInputStreamTransform.write(chunk, encoding);
        }
        else {
            let position = 0;

            if (leftOverData != null) {
                let audioChunk = Buffer.alloc(chunkSize);
                const copiedAmount = leftOverData.length;
                console.log(`${copiedAmount} bytes left over, writing with next chunk`);
                leftOverData.copy(audioChunk);
                leftOverData = null;
                // fill the rest of the chunk with the start of the new data
                chunk.copy(audioChunk, copiedAmount);
                position += chunkSize - copiedAmount;
                audioInputStreamTransform.write(audioChunk, encoding);
            }

            while (chunk.length - position > chunkSize) {
                console.log(`${chunk.length - position} bytes left in chunk, writing with next audioChunk`);
                let audioChunk = Buffer.alloc(chunkSize);
                chunk.copy(audioChunk, 0, position, position+chunkSize);
                position += chunkSize;
                audioInputStreamTransform.write(audioChunk, encoding);
            }

            if (chunk.length - position > 0) {
                leftOverData = Buffer.alloc(chunk.length - position);
                chunk.copy(leftOverData, 0, position);
            }
        }

        next();
    },

    final() {
        if (leftOverData != null) {
            audioInputStreamTransform.write(leftOverData);
        }
        audioInputStreamTransform.end();
    }
});

// stream the file
file.pipe(chunkInputTransform);

Python

Stream and transcribe an audio file

Attention: This example uses the Rev AI Python SDK.
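
The SDK is published on PyPI under the name rev_ai:

pip install rev_ai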

The following example can be used to configure your streaming client, send a stream of audio as a generator, and obtain the transcript as the audio is processed.

To use this example, replace the <FILEPATH> placeholder with the path to the file you wish to transcribe and replace the <REVAI_ACCESS_TOKEN> placeholder with your Rev AI access token.

from rev_ai.models import MediaConfig
from rev_ai.streamingclient import RevAiStreamingClient
import io


"""
Name of file to be transcribed
"""
filename = "<FILEPATH>"

"""
String of your access token
"""
access_token = "<REVAI_ACCESS_TOKEN>"

"""
Media configuration of audio file.
This includes the content type, layout, rate, format, and # of channels
"""
config = MediaConfig("audio/x-raw", "interleaved", 16000, "S16LE", 1)

"""
Create client with your access token and media configuration
"""
streamclient = RevAiStreamingClient(access_token, config)

"""
Open file and read data into an array.
In practice, stream data would be divided into chunks.
"""
with io.open(filename, 'rb') as stream:
    MEDIA_GENERATOR = [stream.read()]

"""
Starts the streaming connection and creates a thread to send bytes from the
MEDIA_GENERATOR. response_generator is a generator yielding responses from
the server
"""
response_generator = streamclient.start(MEDIA_GENERATOR)

"""
Iterates through the responses from the server when obtained
"""
for response in response_generator:
    print(response)

"""
Ends the connection early. Not needed as the server will close the connection
upon receiving an "EOS" message.
"""
streamclient.end()
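
The example above reads the entire file into memory as a single element. For a real stream you would yield smaller chunks instead; a minimal generator sketch (a hypothetical helper, not part of the SDK) could replace MEDIA_GENERATOR:

def media_generator(filename, chunk_size=8000):
    """
    Hypothetical helper, not part of the SDK.
    Yields the file's contents in chunk_size-byte pieces.
    """
    with io.open(filename, 'rb') as stream:
        while True:
            chunk = stream.read(chunk_size)
            if not chunk:
                return
            yield chunk

response_generator = streamclient.start(media_generator(filename))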

Stream and transcribe microphone audio

Attention: This example uses the Rev AI Python SDK.
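
In addition to the Rev AI SDK, this example requires the pyaudio and six packages (PyAudio itself depends on the system PortAudio library):

pip install rev_ai pyaudio six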

The following example can be used to configure your streaming client, send audio as a stream from your microphone input, and obtain the transcript as it is processed.

To use this example, replace the <REVAI_ACCESS_TOKEN> placeholder with your Rev AI access token.

import pyaudio
from rev_ai.models import MediaConfig
from rev_ai.streamingclient import RevAiStreamingClient
from six.moves import queue

"""
Insert your access token here
"""
access_token = "<REVAI_ACCESS_TOKEN>"

class MicrophoneStream(object):
    """
    Opens a recording stream as a generator yielding the audio chunks.
    """
    def __init__(self, rate, chunk):
        self._rate = rate
        self._chunk = chunk
        """
        Create a thread-safe buffer of audio data
        """
        self._buff = queue.Queue()
        self.closed = True

    def __enter__(self):
        self._audio_interface = pyaudio.PyAudio()
        # The API currently only supports 1-channel (mono) audio.
        # Run the audio stream asynchronously to fill the buffer object.
        # This is necessary so that the input device's buffer doesn't
        # overflow while the calling thread makes network requests, etc.
        self._audio_stream = self._audio_interface.open(
            format=pyaudio.paInt16,
            channels=1, rate=self._rate,
            input=True, frames_per_buffer=self._chunk,
            stream_callback=self._fill_buffer,
        )

        self.closed = False

        return self

    def __exit__(self, type, value, traceback):
        self._audio_stream.stop_stream()
        self._audio_stream.close()
        self.closed = True
        """
        Signal the generator to terminate so that the client's
        streaming_recognize method will not block the process termination.
        """
        self._buff.put(None)
        self._audio_interface.terminate()

    def _fill_buffer(self, in_data, frame_count, time_info, status_flags):
        """
        Continuously collect data from the audio stream, into the buffer.
        """
        self._buff.put(in_data)
        return None, pyaudio.paContinue

    def generator(self):
        while not self.closed:
            """
            Use a blocking get() to ensure there's at least one chunk of
            data, and stop iteration if the chunk is None, indicating the
            end of the audio stream.
            """
            chunk = self._buff.get()
            if chunk is None:
                return
            data = [chunk]

            """
            Now consume whatever other data's still buffered.
            """
            while True:
                try:
                    chunk = self._buff.get(block=False)
                    if chunk is None:
                        return
                    data.append(chunk)
                except queue.Empty:
                    break

            yield b''.join(data)


"""
Sampling rate of your microphone and desired chunk size
"""
rate = 44100
chunk = int(rate/10)

"""
Creates a media config with the settings set for a raw microphone input
"""
example_mc = MediaConfig('audio/x-raw', 'interleaved', 44100, 'S16LE', 1)

streamclient = RevAiStreamingClient(access_token, example_mc)

"""
Opens microphone input. The input will stop after a keyboard interrupt.
"""
with MicrophoneStream(rate, chunk) as stream:
    """
    Uses try method to enable users to manually close the stream
    """
    try:
        """
        Starts the server connection and thread sending microphone audio
        """
        response_gen = streamclient.start(stream.generator())

        """
        Iterates through responses and prints them
        """
        for response in response_gen:
            print(response)

    except KeyboardInterrupt:
        """
        Ends the WebSocket connection.
        """
        streamclient.end()
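
If the default input device is not the microphone you want, a short PyAudio sketch like the following (illustrative, not part of the Rev AI SDK) lists the available devices; you can then pass the chosen index as input_device_index when opening the stream:

p = pyaudio.PyAudio()
# print the index and name of every audio device PyAudio can see
for i in range(p.get_device_count()):
    print(i, p.get_device_info_by_index(i).get('name'))
p.terminate()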

Java

Stream a local file

Attention: This example uses the Rev AI Java SDK.
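
The SDK is distributed through Maven Central. Assuming the ai.rev:revai-java-sdk coordinates (check Maven Central for the current version), a Maven dependency looks like:

<dependency>
  <groupId>ai.rev</groupId>
  <artifactId>revai-java-sdk</artifactId>
  <version><VERSION></version>
</dependency>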

The following example can be used to configure a streaming client, stream audio from a file, and obtain the transcript as the audio is processed.

To use this example, replace the <FILEPATH> placeholder with the path to the file you wish to transcribe and the <REVAI_ACCESS_TOKEN> placeholder with your Rev AI account's access token.

import ai.rev.speechtotext.RevAiWebSocketListener;
import ai.rev.speechtotext.StreamingClient;
import ai.rev.speechtotext.models.streaming.ConnectedMessage;
import ai.rev.speechtotext.models.streaming.Hypothesis;
import ai.rev.speechtotext.models.streaming.SessionConfig;
import ai.rev.speechtotext.models.streaming.StreamContentType;
import okhttp3.Response;
import okio.ByteString;

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Arrays;

public class RevAiStreaming {

  public void streamFromLocalFile() throws InterruptedException {
    String accessToken = "<REVAI_ACCESS_TOKEN>";
    String filePath = "<FILEPATH>";

    // Configure the streaming content type
    StreamContentType streamContentType = new StreamContentType();
    streamContentType.setContentType("audio/x-raw"); // audio content type
    streamContentType.setLayout("interleaved"); // layout
    streamContentType.setFormat("S16LE"); // format
    streamContentType.setRate(16000); // sample rate
    streamContentType.setChannels(1); // channels

    // Set up the SessionConfig with any optional parameters
    SessionConfig sessionConfig = new SessionConfig();
    sessionConfig.setMetaData("Streaming from the Java SDK");
    sessionConfig.setFilterProfanity(true);

    // Initialize your client with your access token
    StreamingClient streamingClient = new StreamingClient(accessToken);

    // Initialize your WebSocket listener
    WebSocketListener webSocketListener = new WebSocketListener();

    // Begin the streaming session
    streamingClient.connect(webSocketListener, streamContentType, sessionConfig);

    // Read file from disk
    File file = new File(filePath);

    // Convert file into byte array
    byte[] fileByteArray = new byte[(int) file.length()];
    try (final FileInputStream fileInputStream = new FileInputStream(file)) {
      BufferedInputStream bufferedInputStream = new BufferedInputStream(fileInputStream);
      try (final DataInputStream dataInputStream = new DataInputStream(bufferedInputStream)) {
        dataInputStream.readFully(fileByteArray, 0, fileByteArray.length);
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    } catch (IOException e) {
      throw new RuntimeException(e);
    }

    // Set the number of bytes to send in each message
    int chunk = 8000;

    // Stream the file in the configured chunk size
    for (int start = 0; start < fileByteArray.length; start += chunk) {
      streamingClient.sendAudioData(
          ByteString.of(
              ByteBuffer.wrap(
                  Arrays.copyOfRange(
                      fileByteArray, start, Math.min(fileByteArray.length, start + chunk)))));
    }

    // Wait to make sure all responses are received
    Thread.sleep(5000);

    // Close the WebSocket
    streamingClient.close();
  }

  // Your WebSocket listener for all streaming responses
  private static class WebSocketListener implements RevAiWebSocketListener {

    @Override
    public void onConnected(ConnectedMessage message) {
      System.out.println(message);
    }

    @Override
    public void onHypothesis(Hypothesis hypothesis) {
      System.out.println(hypothesis.toString());
    }

    @Override
    public void onError(Throwable t, Response response) {
      System.out.println(response);
    }

    @Override
    public void onClose(int code, String reason) {
      System.out.println(reason);
    }

    @Override
    public void onOpen(Response response) {
      System.out.println(response.toString());
    }
  }
}