Ensure compatibility across multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core features of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Here is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used to achieve transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    });

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));
await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Applications

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more details, visit the official AssemblyAI blog.

Image source: Shutterstock.
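As a footnote on the compatibility targets mentioned at the top (.NET 6.0, .NET Framework 4.6.2, .NET Standard 2.0), a consuming project can multi-target in its project file. This is a minimal sketch, not the SDK's own build configuration; the package name `AssemblyAI` refers to the NuGet package, and the floating `Version="*"` is illustrative — pin to the release you actually use:

```xml
<!-- Minimal multi-targeted project file sketch -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Standard target framework monikers for the frameworks the article lists -->
    <TargetFrameworks>net6.0;net462;netstandard2.0</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup>
    <!-- Illustrative version; pin this in real projects -->
    <PackageReference Include="AssemblyAI" Version="*" />
  </ItemGroup>
</Project>
```

Because the SDK keeps its dependency surface small, a multi-targeted consumer like this is less likely to hit version conflicts or need binding redirects on .NET Framework.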