Node Code Samples
Use the Node code samples below to quickly get started developing with the Rev AI APIs.
Submit a local file for transcription
attention
This example uses the Rev AI Node SDK.
The following example demonstrates how to submit a local audio file for transcription.
To use this example, replace the <FILEPATH>
placeholder with the path to the file you wish to transcribe and the <REVAI_ACCESS_TOKEN>
placeholder with your Rev AI account's access token.
// create a client
import { RevAiApiClient } from 'revai-node-sdk';
var accessToken = '<REVAI_ACCESS_TOKEN>';
var filePath = '<FILEPATH>';
// initialize the client with your access token
var client = new RevAiApiClient(accessToken);
// submit a local file
var job = await client.submitJobLocalFile(filePath);
// retrieve transcript
// as plain text
var transcriptText = await client.getTranscriptText(job.id);
// or as an object
var transcriptObject = await client.getTranscriptObject(job.id);
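Note that the transcript calls above only succeed once the job has finished processing. The sketch below, which reuses the client and job variables from the example, polls getJobDetails() until the job leaves the in_progress state; for production workloads a webhook (callback URL) is generally a better fit than polling, and the 5-second interval here is an arbitrary choice.
// poll the job status until processing finishes (sketch; interval chosen arbitrarily)
var jobDetails = await client.getJobDetails(job.id);
while (jobDetails.status === 'in_progress') {
  await new Promise(resolve => setTimeout(resolve, 5000));
  jobDetails = await client.getJobDetails(job.id);
}
// 'transcribed' means the transcript is ready; 'failed' means the job could not be processed
if (jobDetails.status === 'transcribed') {
  var transcript = await client.getTranscriptText(job.id);
}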
Submit a remote file for transcription
attention
This example uses the Rev AI Node SDK.
The following example demonstrates how to submit a remote audio file for transcription.
To use this example, replace the <URL>
placeholder with the public URL to the file you wish to transcribe and the <REVAI_ACCESS_TOKEN>
placeholder with your Rev AI account's access token.
import { RevAiApiClient } from 'revai-node-sdk';
var accessToken = '<REVAI_ACCESS_TOKEN>';
var sourceConfig = {url: '<URL>', auth_headers: null};
const jobOptions = {source_config: sourceConfig}
// initialize the client with your access token
var client = new RevAiApiClient(accessToken);
// submit via a public URL
var job = await client.submitJob(jobOptions);
// retrieve transcript
// as plain text
var transcriptText = await client.getTranscriptText(job.id);
// or as an object
var transcriptObject = await client.getTranscriptObject(job.id);
Calculate the average confidence score of a transcript
attention
This example uses the json-query package.
The following example demonstrates how to calculate the average confidence score of a transcript.
To use this example, replace the <FILEPATH>
placeholder with the path to the transcript file (in JSON format).
// import required modules
const fs = require('fs');
const jsonQuery = require('json-query');
// define path to transcript JSON file
const transcriptFile = '<FILEPATH>';
// read file contents
// retrieve array with elements consisting of each {type: text} token
// as confidence scores are only available for text tokens
const transcript = JSON.parse(fs.readFileSync(transcriptFile));
const elements = jsonQuery('monologues.elements[**][*type=text]', {data: transcript}).value
// iterate over array
// calculate and print average confidence
var count = 0;
var confidenceSum = 0;
var confidenceAverage = 0;
elements.forEach(element => {
confidenceSum += element.confidence;
count++;
})
confidenceAverage = confidenceSum / count;
console.log(`Average confidence over ${count} items: ${confidenceAverage}`);
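If you prefer a more functional style, the same average can be computed in one pass with Array.prototype.reduce over the elements array from the example above:
// equivalent calculation using reduce
const averageConfidence = elements.reduce((sum, element) => sum + element.confidence, 0) / elements.length;
console.log(`Average confidence over ${elements.length} items: ${averageConfidence}`);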
Submit an audio stream for transcription
attention
This example uses the Rev AI Node SDK.
The following example demonstrates how to submit an audio stream for transcription.
To use this example, replace the <FILEPATH>
placeholder with the path to the file you wish to transcribe and the <REVAI_ACCESS_TOKEN>
placeholder with your Rev AI account's access token.
import fs from 'fs';
import { RevAiApiClient } from 'revai-node-sdk';
var accessToken = '<REVAI_ACCESS_TOKEN>';
var filePath = '<FILEPATH>';
// initialize the client with your access token
var client = new RevAiApiClient(accessToken);
// submit as audio data; the filename is optional
const stream = fs.createReadStream(filePath);
var job = await client.submitJobAudioData(stream, 'file.mp3');
// retrieve transcript
// as plain text
var transcriptText = await client.getTranscriptText(job.id);
// or as an object
var transcriptObject = await client.getTranscriptObject(job.id);
Stream a local file
attention
This example uses the Rev AI Node SDK.
The following example can be used to configure a streaming client, stream audio from a file, and obtain the transcript as the audio is processed.
To use this example, replace the <FILEPATH>
placeholder with the path to the file you wish to transcribe and the <REVAI_ACCESS_TOKEN>
placeholder with your Rev AI account's access token.
const revai = require('revai-node-sdk');
const fs = require('fs');
const token = '<REVAI_ACCESS_TOKEN>';
const filePath = '<FILEPATH>';
// Initialize your client with your audio configuration and access token
const audioConfig = new revai.AudioConfig(
/* contentType */ "audio/x-raw",
/* layout */ "interleaved",
/* sample rate */ 16000,
/* format */ "S16LE",
/* channels */ 1
);
var client = new revai.RevAiStreamingClient(token, audioConfig);
// Create your event responses
client.on('close', (code, reason) => {
console.log(`Connection closed, ${code}: ${reason}`);
});
client.on('httpResponse', code => {
console.log(`Streaming client received http response with code: ${code}`);
})
client.on('connectFailed', error => {
console.log(`Connection failed with error: ${error}`);
})
client.on('connect', connectionMessage => {
console.log(`Connected with message: ${connectionMessage}`);
})
// Begin streaming session
var stream = client.start();
// Read file from disk
var file = fs.createReadStream(filePath);
stream.on('data', data => {
console.log(data);
});
stream.on('end', function () {
console.log("End of Stream");
});
file.on('end', () => {
client.end();
});
// Stream the file
file.pipe(stream);
// Forcibly ends the streaming session
// stream.end();
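The data events above simply log each raw hypothesis. Each hypothesis has a type of 'partial' or 'final' and an elements array of tokens, so a slightly more selective handler, sketched below as a replacement for the console.log handler above, can format the two cases differently:
// sketch: print partial hypotheses word by word and final hypotheses as fully punctuated text
stream.on('data', data => {
  if (data.type === 'partial') {
    console.log(`Partial: ${data.elements.map(x => x.value).join(' ')}`);
  } else if (data.type === 'final') {
    console.log(`Final: ${data.elements.map(x => x.value).join('')}`);
  }
});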
Stream and transcribe microphone audio
attention
This example uses the Rev AI Node SDK and the mic package.
The following example can be used to configure your streaming client, send audio as a stream from your microphone input, and obtain the transcript as it is processed.
To use this example, replace the <REVAI_ACCESS_TOKEN>
placeholder with your Rev AI access token.
const revai = require('revai-node-sdk');
const mic = require('mic');
const token = '<REVAI_ACCESS_TOKEN>';
// initialize client with audio configuration and access token
const audioConfig = new revai.AudioConfig(
/* contentType */ "audio/x-raw",
/* layout */ "interleaved",
/* sample rate */ 16000,
/* format */ "S16LE",
/* channels */ 1
);
// initialize microphone configuration
// note: microphone device id differs
// from system to system and can be obtained with
// arecord --list-devices and arecord --list-pcms
const micConfig = {
/* sample rate */ rate: 16000,
/* channels */ channels: 1,
/* device id */ device: 'hw:0,0'
};
var client = new revai.RevAiStreamingClient(token, audioConfig);
var micInstance = mic(micConfig);
// create microphone stream
var micStream = micInstance.getAudioStream();
// create event responses
client.on('close', (code, reason) => {
console.log(`Connection closed, ${code}: ${reason}`);
});
client.on('httpResponse', code => {
console.log(`Streaming client received http response with code: ${code}`);
});
client.on('connectFailed', error => {
console.log(`Connection failed with error: ${error}`);
});
client.on('connect', connectionMessage => {
console.log(`Connected with message: ${connectionMessage}`);
});
micStream.on('error', error => {
console.log(`Microphone input stream error: ${error}`);
});
// begin streaming session
var stream = client.start();
// create event responses
stream.on('data', data => {
console.log(data);
});
stream.on('end', function () {
console.log("End of Stream");
});
// pipe the microphone audio to Rev AI client
micStream.pipe(stream);
// start the microphone
micInstance.start();
// Forcibly ends the streaming session
// stream.end();
Recover from connection errors and timeouts during a stream
attention
This example uses the Rev AI Node SDK.
The following example can be used to configure a streaming client to transcribe a long-duration stream from a RAW-format audio file. It handles reconnections (whether due to session length timeouts or other connectivity interruptions) without losing audio, and re-aligns timestamp offsets to the new streaming session when reconnecting.
To use this example, replace the <FILEPATH>
placeholder with the path to the audio file (RAW format) you wish to stream and the <REVAI_ACCESS_TOKEN>
placeholder with your Rev AI account's access token.
const fs = require('fs');
const revai = require('revai-node-sdk');
const { Writable } = require('stream');
const token = '<REVAI_ACCESS_TOKEN>';
const filePath = '<FILEPATH>';
const bytesPerSample = 2;
const samplesPerSecond = 16000;
const chunkSize = 8000;
// initialize client with audio configuration and access token
const audioConfig = new revai.AudioConfig(
/* contentType */ 'audio/x-raw',
/* layout */ 'interleaved',
/* sample rate */ samplesPerSecond,
/* format */ 'S16LE',
/* channels */ 1
);
// optional session configuration (arguments are positional)
const sessionConfig = new revai.SessionConfig(
'example metadata', /* (optional) metadata */
null,               /* (optional) custom_vocabulary_id */
false,              /* (optional) filter_profanity */
false,              /* (optional) remove_disfluencies */
0,                  /* (optional) delete_after_seconds */
0,                  /* (optional) start_ts */
'machine',          /* (optional) transcriber */
false,              /* (optional) detailed_partials */
'en'                /* (optional) language */
);
// begin streaming session
let client = null;
let revaiStream = null;
let audioBackup = [];
let audioBackupCopy = [];
let newStream = true;
let lastResultEndTsReceived = 0.0;
function handleData(data) {
switch (data.type){
case 'connected':
console.log("Received connected");
break;
case 'partial':
console.log(`Partial: ${data.elements.map(x => x.value).join(' ')}`);
break;
case 'final':
console.log(`Final: ${data.elements.map(x => x.value).join('')}`);
const textElements = data.elements.filter(x => x.type === "text");
lastResultEndTsReceived = textElements[textElements.length - 1].end_ts;
// log approximately how many KiB of audio have been finalized so far
console.log(lastResultEndTsReceived * samplesPerSecond * bytesPerSample / 1024);
break;
default:
// all messages from the API are expected to be one of the previous types
console.error('Received unexpected message');
break;
}
}
function startStream() {
client = new revai.RevAiStreamingClient(token, audioConfig);
// create event responses
client.on('close', (code, reason) => {
console.log(`Connection closed, ${code}: ${reason}`);
if (code !== 1000 || reason == 'Reached max session lifetime'){
console.log('Restarting stream');
restartStream();
}
console.log(`Total bytes written: ${bytesWritten}`);
});
client.on('httpResponse', code => {
console.log(`Streaming client received HTTP response with code: ${code}`);
});
client.on('connectFailed', error => {
console.log(`Connection failed with error: ${error}`);
});
client.on('connect', connectionMessage => {
console.log(`Connected with job ID: ${connectionMessage.id}`);
});
audioBackup = [];
sessionConfig.startTs = lastResultEndTsReceived;
revaiStream = client.start(sessionConfig);
revaiStream.on('data', data => {
handleData(data);
});
revaiStream.on('end', function () {
console.log('End of stream');
});
}
let bytesWritten = 0;
const audioInputStreamTransform = new Writable({
write(chunk, encoding, next) {
if (newStream && audioBackupCopy.length !== 0) {
// approximate how much of the backed-up audio was already transcribed
const bytesSent = lastResultEndTsReceived * samplesPerSecond * bytesPerSample;
const chunksSent = Math.floor(bytesSent / chunkSize);
if (chunksSent !== 0) {
for (let i = chunksSent; i < audioBackupCopy.length; i++) {
revaiStream.write(audioBackupCopy[i][0], audioBackupCopy[i][1]);
}
}
newStream = false;
}
audioBackup.push([chunk, encoding]);
if (revaiStream) {
revaiStream.write(chunk, encoding);
bytesWritten += chunk.length;
}
next();
},
final() {
if (client && revaiStream) {
client.end();
revaiStream.end();
}
}
});
function restartStream() {
if (revaiStream) {
client.end();
revaiStream.end();
revaiStream.removeListener('data', handleData);
revaiStream = null;
}
audioBackupCopy = [];
audioBackupCopy = audioBackup;
newStream = true;
startStream();
}
// read file from disk
let file = fs.createReadStream(filePath);
startStream();
file.on('end', () => {
chunkInputTransform.end();
})
// holds data left over after splitting writes into chunks of 8000 bytes
let leftOverData = null;
const chunkInputTransform = new Writable({
write(chunk, encoding, next) {
if (encoding !== 'buffer'){
console.log(`${encoding} is not buffer, writing directly`);
audioInputStreamTransform.write(chunk, encoding);
}
else {
let position = 0;
if (leftOverData != null) {
let audioChunk = Buffer.alloc(chunkSize);
const copiedAmount = leftOverData.length;
console.log(`${copiedAmount} left over, writing with next chunk`);
leftOverData.copy(audioChunk);
leftOverData = null;
chunk.copy(audioChunk, copiedAmount);
position += chunkSize - copiedAmount;
audioInputStreamTransform.write(audioChunk, encoding);
}
while(chunk.length - position > chunkSize) {
console.log(`${chunk.length - position} bytes left in chunk, writing with next audioChunk`);
let audioChunk = Buffer.alloc(chunkSize);
chunk.copy(audioChunk, 0, position, position+chunkSize);
position += chunkSize;
audioInputStreamTransform.write(audioChunk, encoding);
}
if (chunk.length > 0) {
leftOverData = Buffer.alloc(chunk.length - position);
chunk.copy(leftOverData, 0, position);
}
}
next();
},
final() {
if (leftOverData != null) {
audioInputStreamTransform.write(leftOverData);
audioInputStreamTransform.end();
}
}
})
// stream the file
file.pipe(chunkInputTransform);
Send email notifications using a webhook
attention
This example uses the Rev AI Node SDK, the Twilio SendGrid Node package and the Express framework.
The following example demonstrates how to implement a webhook handler that receives and parses the HTTP POST message from the Rev AI API and sends an email notification using Express and the Twilio SendGrid API client.
To use this example, you must first replace three placeholders:
- <SENDER_EMAIL> and <RECEIVER_EMAIL> with the sender and recipient email addresses
- <SENDGRID_API_KEY> with your Twilio SendGrid API key
const bodyParser = require('body-parser');
const express = require('express');
const sendgrid = require('@sendgrid/mail');
// Twilio SendGrid API key
const sendgridKey = '<SENDGRID_API_KEY>';
// sender email address
const senderEmail = '<SENDER_EMAIL>';
// recipient email address
const receiverEmail = '<RECEIVER_EMAIL>';
// set API key for SendGrid
sendgrid.setApiKey(sendgridKey);
// create Express application
const app = express();
app.use(bodyParser.json());
// handle requests to webhook endpoint
app.post('/hook', async (req, res) => {
const job = req.body.job;
console.log(`Received status for job id ${job.id}: ${job.status}`);
const message = {
from: senderEmail,
to: receiverEmail,
subject: `Job ${job.id} is COMPLETE`,
text: job.status === 'transcribed'
? `Log in at https://rev.ai/jobs/speech-to-text/ to collect your transcript.`
: `An error occurred. Log in at https://rev.ai/jobs/speech-to-text/ to view details.`
};
try {
await sendgrid.send(message);
console.log('Email successfully sent');
} catch (e) {
console.error(e);
}
// acknowledge receipt so the notification is not retried
res.sendStatus(200);
});
// start application on port 3000
app.listen(3000, () => {
console.log('Webhook listening');
})
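For Rev AI to call this endpoint, the webhook URL must be supplied when the job is submitted. A minimal sketch using the Rev AI Node SDK is shown below; <MEDIA_URL> and <WEBHOOK_HOST> are placeholders for your own media URL and the publicly reachable host running this Express application.
const { RevAiApiClient } = require('revai-node-sdk');
const revAiClient = new RevAiApiClient('<REVAI_ACCESS_TOKEN>');
// submit a job whose completion notification is POSTed to the /hook endpoint above
(async () => {
  const job = await revAiClient.submitJobUrl('<MEDIA_URL>', {
    callback_url: 'https://<WEBHOOK_HOST>/hook'
  });
  console.log(`Submitted job ${job.id}`);
})();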
Save transcripts to MongoDB using a webhook
attention
This example uses the Rev AI Node SDK, the MongoDB Node.js Driver and the Express framework.
The following example demonstrates how to implement a webhook handler that receives and parses the HTTP POST message from the Rev AI API and then makes a subsequent request to the API to retrieve the complete transcript. The handler then saves the received data to a MongoDB database collection as a JSON document.
To use this example, you must replace the <MONGODB_CONNECTION_URI>
with the connection URI to your MongoDB database and the <REVAI_ACCESS_TOKEN>
placeholder with your Rev AI account's access token.
const { RevAiApiClient } = require('revai-node-sdk');
const { MongoClient } = require('mongodb');
const bodyParser = require('body-parser');
const express = require('express');
// MongoDB connection string
const mongodbUri = '<MONGODB_CONNECTION_URI>';
// Rev AI access token
const revAiToken = '<REVAI_ACCESS_TOKEN>';
// create Express application
const app = express();
app.use(bodyParser.json());
// create Mongo client
const mongo = new MongoClient(mongodbUri);
mongo.connect();
const db = mongo.db('mydb');
const transcripts = db.collection('transcripts')
// create Rev AI API client
const revAiClient = new RevAiApiClient(revAiToken);
// handle requests to webhook endpoint
app.post('/hook', async (req, res) => {
const job = req.body.job;
console.log(`Received status for job id ${job.id}: ${job.status}`);
if (job.status === 'transcribed') {
// request transcript
const transcript = await revAiClient.getTranscriptObject(job.id);
console.log(`Received transcript for job id ${job.id}`);
// create MongoDB document
const doc = {
job_id: job.id,
created_on: job.created_on,
language: job.language,
status: job.status,
transcript
}
// save document to MongoDB
try {
const result = await transcripts.insertOne(doc);
console.log(`Saved transcript with document id: ${result.insertedId}`);
} catch (e) {
console.error(e);
}
}
// acknowledge receipt so the notification is not retried
res.sendStatus(200);
});
// start application on port 3000
app.listen(3000, () => {
console.log('Webhook listening');
})
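Documents saved this way can later be retrieved by job ID. A small sketch using the transcripts collection defined above:
// fetch a previously stored transcript document by its Rev AI job id (sketch)
const getStoredTranscript = async (jobId) => {
  return await transcripts.findOne({ job_id: jobId });
};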
Identify language for transcription using a webhook
attention
This example uses the Rev AI Node SDK and the Express framework.
The following example demonstrates a webhook handler that receives both language identification and transcription job results from the respective APIs. If the results are successful, it performs the following additional processing:
- For language identification jobs, it obtains the list of identified languages and the most probable language, and then initiates an asynchronous transcription request that includes this language information.
- For asynchronous transcription jobs, it obtains the final transcript and prints it to the console.
To use this example, replace the <REVAI_ACCESS_TOKEN>
placeholder with your Rev AI account's access token.
const { RevAiApiClient } = require('revai-node-sdk');
const bodyParser = require('body-parser');
const express = require('express');
const axios = require('axios');
const token = '<REVAI_ACCESS_TOKEN>';
// create Axios client
const http = axios.create({
baseURL: 'https://api.rev.ai/',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// create Rev AI API client
const revAiClient = new RevAiApiClient(token);
const getLanguageIdentificationJobResult = async (jobId) => {
return await http.get(`languageid/v1beta/jobs/${jobId}/result`,
{ headers: { 'Accept': 'application/vnd.rev.languageid.v1.0+json' } })
.then(response => response.data)
.catch(console.error);
};
// create Express application
const app = express();
app.use(bodyParser.json());
// define webhook handler
app.post('/hook', async (req, res) => {
// get job, media URL, callback URL
const job = req.body.job;
const fileUrl = job.media_url;
const callbackUrl = job.callback_url;
console.log(`Received status for job id ${job.id}: ${job.status}`);
try {
switch (job.type) {
// language job result handler
case 'language_id':
if (job.status === 'completed') {
const languageJobResult = await getLanguageIdentificationJobResult(job.id);
// retrieve most probable language
// use as input to transcription request
const languageId = languageJobResult.top_language;
console.log(`Received result for job id ${job.id}: language '${languageId}'`);
const transcriptJobSubmission = await revAiClient.submitJobUrl(fileUrl, {
language: languageId,
callback_url: callbackUrl
});
console.log(`Submitted for transcription with job id ${transcriptJobSubmission.id}`);
}
break;
// transcription job result handler
case 'async':
if (job.status === 'transcribed') {
// retrieve transcript
const transcriptJobResult = await revAiClient.getTranscriptObject(job.id);
console.log(`Received transcript for job id ${job.id}`);
// do something with transcript
// for example: print to console
console.log(transcriptJobResult);
}
break;
}
} catch (e) {
console.error(e);
}
// acknowledge receipt so the notification is not retried
res.sendStatus(200);
});
// start application on port 3000
app.listen(3000, () => {
console.log('Webhook listening');
})
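The handler above only processes results; the flow starts by submitting a language identification job whose callback points at this endpoint. The sketch below reuses the axios client defined above; the request body fields (source_config and callback_url) are assumptions inferred from the job fields used in the handler, so verify them against the Language Identification API reference.
// sketch: submit a language identification job (request fields are assumptions; check the API reference)
const submitLanguageIdentificationJob = async (mediaUrl, callbackUrl) => {
  return await http.post('languageid/v1beta/jobs', {
    source_config: { url: mediaUrl },
    callback_url: callbackUrl
  })
  .then(response => response.data)
  .catch(console.error);
};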
Submit JSON data for topic extraction
attention
This example uses the Axios HTTP client.
The following example demonstrates how to submit a JSON transcript for topic extraction using the Axios HTTP client.
To use this example, set the <REVAI_ACCESS_TOKEN>
variable to your Rev AI account's access token.
const axios = require('axios');
const token = '<REVAI_ACCESS_TOKEN>';
// create a client
const http = axios.create({
baseURL: 'https://api.rev.ai/topic_extraction/v1/',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// submit a POST request
const submitTopicExtractionJobJson = async (jsonData) => {
return await http.post(`jobs`,
JSON.stringify({
json: jsonData
}))
.then(response => response.data)
.catch(console.error);
};
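As a usage sketch, the helper above can be fed a transcript previously saved to disk; transcript.json is a placeholder path here.
const fs = require('fs');
// load a saved Rev AI transcript (JSON) and submit it for topic extraction
const transcript = JSON.parse(fs.readFileSync('transcript.json'));
submitTopicExtractionJobJson(transcript)
  .then(job => job && console.log(`Submitted topic extraction job ${job.id}`));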
Submit plaintext data for topic extraction
attention
This example uses the Axios HTTP client.
The following example demonstrates how to submit a plaintext transcript for topic extraction using the Axios HTTP client.
To use this example, set the <REVAI_ACCESS_TOKEN>
variable to your Rev AI account's access token.
const axios = require('axios');
const token = '<REVAI_ACCESS_TOKEN>';
// create a client
const http = axios.create({
baseURL: 'https://api.rev.ai/topic_extraction/v1/',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// submit a POST request
const submitTopicExtractionJobText = async (textData) => {
return await http.post(`jobs`,
JSON.stringify({
text: textData
}))
.then(response => response.data)
.catch(console.error);
};
Check the status of a topic extraction job
attention
This example uses the Axios HTTP client.
The following example demonstrates how to retrieve the status of a topic extraction job using the Axios HTTP client.
To use this example, set the <REVAI_ACCESS_TOKEN>
variable to your Rev AI account's access token.
const axios = require('axios');
const token = '<REVAI_ACCESS_TOKEN>';
// create a client
const http = axios.create({
baseURL: 'https://api.rev.ai/topic_extraction/v1/',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// submit a GET request
const getTopicExtractionJobStatus = async (jobId) => {
return await http.get(`jobs/${jobId}`)
.then(response => response.data)
.catch(console.error);
};
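For example, the helper can be wrapped in a simple polling loop that waits for the job to finish; the 5-second interval below is arbitrary, and completed/failed are treated as the terminal statuses.
// poll the job status until it is no longer in progress (sketch)
const waitForTopicExtractionJob = async (jobId) => {
  let job = await getTopicExtractionJobStatus(jobId);
  while (job && job.status === 'in_progress') {
    await new Promise(resolve => setTimeout(resolve, 5000));
    job = await getTopicExtractionJobStatus(jobId);
  }
  return job;
};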
Retrieve a topic extraction report
attention
This example uses the Axios HTTP client.
The following example demonstrates how to retrieve the result of a topic extraction job using the Axios HTTP client.
To use this example, set the <REVAI_ACCESS_TOKEN>
variable to your Rev AI account's access token.
const axios = require('axios');
const token = '<REVAI_ACCESS_TOKEN>';
// create a client
const http = axios.create({
baseURL: 'https://api.rev.ai/topic_extraction/v1/',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// submit a GET request
const getTopicExtractionJobResult = async (jobId) => {
return await http.get(`jobs/${jobId}/result`,
{ headers: { 'Accept': 'application/vnd.rev.topic.v1.0+json' } })
.then(response => response.data)
.catch(console.error);
};
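Once a job has completed, the report lists each detected topic with a relevance score. The sketch below assumes the topics, topic_name and score fields of the topic extraction result and a placeholder <JOB_ID>.
// print each detected topic and its score (result field names assumed from the topic extraction schema)
getTopicExtractionJobResult('<JOB_ID>')
  .then(result => {
    if (!result) return;
    result.topics.forEach(topic => {
      console.log(`${topic.topic_name}: ${topic.score}`);
    });
  });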
Submit JSON data for sentiment analysis
attention
This example uses the Axios HTTP client.
The following example demonstrates how to submit a JSON transcript for sentiment analysis using the Axios HTTP client.
To use this example, set the <REVAI_ACCESS_TOKEN>
placeholder to your Rev AI account's access token.
const axios = require('axios');
const token = '<REVAI_ACCESS_TOKEN>';
// create a client
const http = axios.create({
baseURL: 'https://api.rev.ai/sentiment_analysis/v1/',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// submit a POST request
const submitSentimentAnalysisJobJson = async (jsonData) => {
return await http.post(`jobs`,
JSON.stringify({
json: jsonData
}))
.then(response => response.data)
.catch(console.error);
};
Submit plaintext data for sentiment analysis
attention
This example uses the Axios HTTP client.
The following example demonstrates how to submit a plaintext transcript for sentiment analysis using the Axios HTTP client.
To use this example, set the <REVAI_ACCESS_TOKEN>
placeholder to your Rev AI account's access token.
const axios = require('axios');
const token = '<REVAI_ACCESS_TOKEN>';
// create a client
const http = axios.create({
baseURL: 'https://api.rev.ai/sentiment_analysis/v1/',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// submit a POST request
const submitSentimentAnalysisJobText = async (textData) => {
return await http.post(`jobs`,
JSON.stringify({
text: textData
}))
.then(response => response.data)
.catch(console.error);
};
Check the status of a sentiment analysis job
attention
This example uses the Axios HTTP client.
The following example demonstrates how to retrieve the status of a sentiment analysis job using the Axios HTTP client.
To use this example, set the <REVAI_ACCESS_TOKEN>
placeholder to your Rev AI account's access token.
const axios = require('axios');
const token = '<REVAI_ACCESS_TOKEN>';
// create a client
const http = axios.create({
baseURL: 'https://api.rev.ai/sentiment_analysis/v1/',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// submit a GET request
const getSentimentAnalysisJobStatus = async (jobId) => {
return await http.get(`jobs/${jobId}`)
.then(response => response.data)
.catch(console.error);
};
Retrieve a sentiment analysis report
attention
This example uses the Axios HTTP client.
The following example demonstrates how to retrieve the result of a sentiment analysis job using the Axios HTTP client.
To use this example, set the <REVAI_ACCESS_TOKEN>
placeholder to your Rev AI account's access token.
const axios = require('axios');
const token = '<REVAI_ACCESS_TOKEN>';
// create a client
const http = axios.create({
baseURL: 'https://api.rev.ai/sentiment_analysis/v1/',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json'
}
});
// submit a GET request
const getSentimentAnalysisJobResult = async (jobId) => {
return await http.get(`jobs/${jobId}/result`,
{ headers: { 'Accept': 'application/vnd.rev.sentiment.v1.0+json' } })
.then(response => response.data)
.catch(console.error);
};
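As with the topic extraction report, the returned result can then be iterated. The sketch below assumes the messages, content, score and sentiment fields of the sentiment analysis result and a placeholder <JOB_ID>.
// print each analyzed message with its sentiment label and score (field names assumed from the result schema)
getSentimentAnalysisJobResult('<JOB_ID>')
  .then(result => {
    if (!result) return;
    result.messages.forEach(message => {
      console.log(`[${message.sentiment}] (${message.score}) ${message.content}`);
    });
  });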