Changelog
Follow along to see weekly accuracy and product improvements.
Improvements - observability, logging, and patches
We have improved logging for our LeMUR service to surface more detailed errors to users.
We have increased internal observability into our Speech API, allowing for finer-grained usage metrics.
We have fixed a minor bug that would sometimes lead to incorrect timestamps for zero-confidence words.
We have fixed an issue in which requests to LeMUR would occasionally hang during peak usage due to a memory leak.
Multi-language speaker labels
We have recently launched Speaker Labels for 10 additional languages:
- Spanish
- Portuguese
- German
- Dutch
- Finnish
- French
- Italian
- Polish
- Russian
- Turkish
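As a rough sketch using our Python SDK (parameter and attribute names follow the SDK; the Spanish audio URL is a placeholder), enabling Speaker Labels for one of these languages only requires a language code alongside speaker_labels:
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"

# request Speaker Labels for a Spanish-language file (placeholder URL)
config = aai.TranscriptionConfig(speaker_labels=True, language_code="es")

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.com/spanish_meeting.mp3", config)

# print each utterance with its speaker label
for utterance in transcript.utterances:
    print(f"Speaker {utterance.speaker}: {utterance.text}")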
Audio Intelligence unbundling and price decreases
We have unbundled and lowered the price for our Audio Intelligence models. Previously, the bundled price for all Audio Intelligence models was $2.10/hr, regardless of the number of models used.
We have made each model accessible at a lower, unbundled, per-model rate:
- Auto chapters: $0.30/hr
- Content Moderation: $0.25/hr
- Entity detection: $0.15/hr
- Key Phrases: $0.06/hr
- PII Redaction: $0.20/hr
- Audio Redaction: $0.05/hr
- Sentiment analysis: $0.12/hr
- Summarization: $0.06/hr
- Topic detection: $0.20/hr
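Because each model is now billed separately, you can enable only the models you need on a per-request basis. A minimal sketch with our Python SDK (the audio URL is a placeholder):
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"

# enable only the Audio Intelligence models you want to use and pay for
config = aai.TranscriptionConfig(
    auto_chapters=True,       # Auto Chapters at $0.30/hr
    sentiment_analysis=True,  # Sentiment Analysis at $0.12/hr
    entity_detection=True,    # Entity Detection at $0.15/hr
)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.com/podcast_episode.mp3", config)

# print a headline for each automatically generated chapter
for chapter in transcript.chapters:
    print(chapter.headline)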
New language support and improvements to existing languages
We now support the following additional languages for asynchronous transcription through our /v2/transcript endpoint:
- Chinese
- Finnish
- Korean
- Polish
- Russian
- Turkish
- Ukrainian
- Vietnamese
Additionally, we've made improvements in accuracy and quality to the following languages:
- Dutch
- French
- German
- Italian
- Japanese
- Portuguese
- Spanish
You can see a full list of supported languages and features here. You can see how to specify a language in your API request here. Note that not all languages support Automatic Language Detection.
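As a quick sketch with our Python SDK, you can either set the language explicitly or enable Automatic Language Detection where supported (the audio URL is a placeholder):
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"

# option 1: specify the language explicitly
config = aai.TranscriptionConfig(language_code="uk")  # Ukrainian

# option 2: let the model detect the language automatically
# config = aai.TranscriptionConfig(language_detection=True)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.com/ukrainian_interview.mp3", config)
print(transcript.text)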
Pricing decreases
We have decreased the price of Core Transcription from $0.90 per hour to $0.65 per hour, and decreased the price of Real-Time Transcription from $0.90 per hour to $0.75 per hour.
Both decreases were effective as of August 3rd.
Significant Summarization model speedups
We’ve implemented changes that yield a 43% to 200% increase in processing speed for our Summarization models, depending on the model selected, with no measurable impact on the quality of results.
We have standardized the response from our API for automatically detected languages that do not support requested features. In particular, when Automatic Language Detection is used and the detected language does not support a feature requested in the transcript request, our API will return null in the response for that feature.
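In practice, this means checking for an empty field before using a feature's results. A minimal sketch with our Python SDK, assuming Summarization is requested alongside Automatic Language Detection (the audio URL is a placeholder):
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"

# request Summarization while letting the language be detected automatically
config = aai.TranscriptionConfig(language_detection=True, summarization=True)

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.com/unknown_language.mp3", config)

# if the detected language does not support Summarization, the field comes back null/None
if transcript.summary is None:
    print("Summarization is not available for the detected language.")
else:
    print(transcript.summary)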
Introducing LeMUR, the easiest way to build LLM apps on spoken data
We've released LeMUR - our framework for applying LLMs to spoken data - for general availability. LeMUR is optimized for high accuracy on specific tasks:
- Custom Summary allows users to automatically summarize files in a flexible way
- Question & Answer allows users to ask specific questions about audio files and receive answers to these questions
- Action Items allows users to automatically generate a list of action items from virtual or in-person meetings
Additionally, LeMUR can be applied to groups of transcripts in order to analyze a set of files at once, allowing users, for example, to summarize many podcast episodes or ask questions about a series of customer calls.
Our Python SDK allows users to work with LeMUR in just a few lines of code:
# version 0.15 or greater
import assemblyai as aai
# set your API key
aai.settings.api_key = f"{API_TOKEN}"
# transcribe the audio file (meeting recording)
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://storage.googleapis.com/aai-web-samples/meeting.mp4")
# generate and print action items
result = transcript.lemur.action_items(
    context="A GitLab meeting to discuss logistics",
    answer_format="**<topic header>**\n<relevant action items>\n",
)
print(result.response)
Learn more about LeMUR in our blog post, or jump straight into the code in our associated Colab notebook.
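For the transcript-group workflow mentioned above, here is a rough sketch (method names follow our Python SDK; the file URLs are placeholders):
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"

# transcribe several related files as a group
transcriber = aai.Transcriber()
transcript_group = transcriber.transcribe_group([
    "https://example.com/customer_call_1.mp3",
    "https://example.com/customer_call_2.mp3",
])

# summarize every file in the group with a single LeMUR call
result = transcript_group.lemur.summarize(
    context="A series of customer support calls",
    answer_format="Bullet points",
)
print(result.response)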
Introducing our Conformer-2 model
We've released Conformer-2, our latest AI model for automatic speech recognition. Conformer-2 is trained on 1.1M hours of English audio data, extending Conformer-1 to provide improvements on proper nouns, alphanumerics, and robustness to noise.
Conformer-2 is now the default model for all English audio files sent to the /v2/transcript endpoint for async processing and introduces no breaking changes.
We’ll be releasing Conformer-2 for real-time English transcriptions within the next few weeks.
Read our full blog post about Conformer-2 here. You can also try it out in our Playground.
New parameter and timestamps fix
We’ve introduced a new, optional speech_threshold parameter, allowing users to only transcribe files that contain at least a specified percentage of spoken audio, represented as a ratio in the range [0, 1].
You can use the speech_threshold parameter with our Python SDK as below:
import assemblyai as aai
aai.settings.api_key = f"{ASSEMBLYAI_API_KEY}"
config = aai.TranscriptionConfig(speech_threshold=0.1)
file_url = "https://github.com/AssemblyAI-Examples/audio-examples/raw/main/20230607_me_canadian_wildfires.mp3"
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(file_url, config)
print(transcript.text)
Smoke from hundreds of wildfires in Canada is triggering air quality alerts throughout the US. Skylines from ...
If the percentage of speech in the audio file does not meet or surpass the provided threshold, then the value of transcript.text will be None and you will receive an error:
if not transcript.text:
    print(transcript.error)
Audio speech threshold 0.9461 is below the requested speech threshold value 1.0
As usual, you can also include the speech_threshold parameter in the JSON of raw HTTP requests for any language.
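For example, here is a sketch of a raw request using Python's requests library (replace the API key and audio URL with your own):
import requests

headers = {"authorization": "YOUR_API_KEY"}

# submit a transcription request with a speech_threshold of 0.1
response = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers=headers,
    json={
        "audio_url": "https://example.com/mostly_music.mp3",
        "speech_threshold": 0.1,
    },
)

# poll this transcript ID until processing completes
print(response.json()["id"])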
We’ve fixed a bug in which timestamps could sometimes be incorrectly reported for our Topic Detection and Content Safety models.
We’ve made improvements to detect and remove a hallucination that would sometimes occur with specific audio patterns.
Character sequence improvements
We’ve fixed an issue in which the last character in an alphanumeric sequence could fail to be transcribed. The fix is effective immediately and constitutes a 95% reduction in errors of this type.
We’ve fixed an issue in which consecutive identical numbers in a long number sequence could fail to be transcribed. This fix is effective immediately and constitutes a 66% reduction in errors of this type.
Speaker Labels improvement
We’ve made improvements to the Speaker Labels model, adjusting the impact of the speakers_expected parameter to better allow the model to determine the correct number of unique speakers, especially in cases where one or more speakers talk substantially less than the others.
We’ve expanded our caching system to include additional third-party resources to help further ensure our continued operations in the event of external resources being down.
Significant processing time improvement
We’ve made significant improvements to our transcoding pipeline, resulting in a 98% overall speedup in transcoding time and a 12% overall improvement in processing time for our asynchronous API.
We’ve implemented a caching system for some third-party resources to ensure our continued operations in the event of external resources being down.
Announcing LeMUR - our new framework for applying powerful LLMs to transcribed speech
We’re introducing our new framework LeMUR, which makes it simple to apply Large Language Models (LLMs) to transcripts of audio files up to 10 hours in length.
LLMs unlock a range of impressive capabilities that allow teams to build powerful Generative AI features. However, building these features is difficult due to the limited context windows of modern LLMs, among other challenges that necessitate the development of complicated processing pipelines.
LeMUR circumvents this problem by making it easy to apply LLMs to transcribed speech, meaning that product teams can focus on building differentiating Generative AI features rather than focusing on building infrastructure. Learn more about what LeMUR can do and how it works in our announcement blog, or jump straight to trying LeMUR in our Playground.
New PII and Entity Detection Model
We’ve upgraded to a new and more accurate PII Redaction model, which improves credit card detections in particular.
We’ve made stability improvements regarding the handling and caching of web requests. These improvements additionally fix a rare issue with punctuation detection.
Multilingual and stereo audio fixes, & Japanese model retraining
We’ve fixed two edge cases in our async transcription pipeline that were producing non-deterministic results from multilingual and stereo audio.
We’ve improved word boundary detection in our Japanese automatic speech recognition model. These changes are effective immediately for all Japanese audio files submitted to AssemblyAI.
Decreased latency and improved password reset
We’ve implemented a range of improvements to our English pipeline, leading to an average 38% improvement in overall latency for asynchronous English transcriptions.
We’ve made improvements to our password reset process, offering greater clarity to users attempting to reset their passwords while still ensuring security throughout the reset process.
Conformer-1 now available for Real-Time transcription, new Speaker Labels parameter, and more
We're excited to announce that our new Conformer-1 Speech Recognition model is now available for real-time English transcriptions, offering a 24.3% relative accuracy improvement.
Effective immediately, this state-of-the-art model will be the default model for all English audio data sent to the wss://api.assemblyai.com/v2/realtime/ws WebSocket API.
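As a minimal sketch of opening a Real-Time connection with the third-party websocket-client package (assuming a 16 kHz audio source; adapt the header and streaming logic to your setup):
import json
import websocket  # pip install websocket-client

# open a Real-Time session; authenticate with your API key
ws = websocket.create_connection(
    "wss://api.assemblyai.com/v2/realtime/ws?sample_rate=16000",
    header={"Authorization": "YOUR_API_KEY"},
)

# the first message confirms that the session has begun
print(json.loads(ws.recv()))
ws.close()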
The Speaker Labels model now accepts a new optional parameter called speakers_expected. If you have high confidence in the number of speakers in an audio file, then you can specify it with speakers_expected in order to improve Speaker Labels performance, particularly for short utterances.
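For example, a raw request sketch using Python's requests library (the audio URL is a placeholder):
import requests

headers = {"authorization": "YOUR_API_KEY"}

# enable Speaker Labels and hint that the file contains three speakers
response = requests.post(
    "https://api.assemblyai.com/v2/transcript",
    headers=headers,
    json={
        "audio_url": "https://example.com/panel_discussion.mp3",
        "speaker_labels": True,
        "speakers_expected": 3,
    },
)
print(response.json()["id"])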
TLS 1.3 is now available for use with the AssemblyAI API. Using TLS 1.3 can decrease latency when establishing a connection to the API.
Our PII redaction scaling has been improved to increase stability, particularly when processing longer files.
We've improved the quality and accuracy of our Japanese model.
Short transcripts that cannot be summarized will now return an empty summary along with a successful transcript.
Introducing our Conformer-1 model
We've released our new Conformer-1 model for speech recognition. Conformer-1 was trained on 650K hours of audio data and is our most accurate model to date.
Conformer-1 is now the default model for all English audio files sent to the /v2/transcript endpoint for async processing.
We'll be releasing it for real-time English transcriptions within the next two weeks, and will add support for more languages soon.
New AI Models for Italian / Japanese Punctuation Improvements
Our Content Safety and Topic Detection models are now available for use with Italian audio files.
We’ve made improvements to our Japanese punctuation model, increasing relative accuracy by 11%. These changes are effective immediately for all Japanese audio files submitted to AssemblyAI.
Hindi Punctuation Improvements
We’ve made improvements to our Hindi punctuation model, increasing relative accuracy by 26%. These changes are effective immediately for all Hindi audio files submitted to AssemblyAI.
We’ve tuned our production infrastructure to reduce latency and improve overall consistency when using the Topic Detection and Content Moderation models.