Integration of Huawei ML Kit Text To Speech (TTS) — Listen to your story in the Flutter StoryApp

Shiddalingeshwar M S
4 min read · Mar 4, 2022


Introduction

In this article, we will integrate Huawei ML Kit into a Flutter StoryApp so that users can listen to stories using the Text to Speech (TTS) service. ML Kit provides diversified, leading machine learning capabilities that are easy to use, helping you develop various AI apps. ML Kit allows your apps to easily leverage Huawei’s long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications across a wide range of industries.

ML Kit offers the following language/voice-related services; in this Flutter sample application, we use Text to Speech.

  • Real-time translation: Translates text from the source language into the target language through the server on the cloud.
  • On-device translation: Translates text from the source language into the target language with the support of an on-device model, even when no Internet service is available.
  • Real-time language detection: Detects the language of text online. Both single-language text and multi-language text are supported.
  • On-device language detection: Detects the language of text without Internet connection. Both single-language text and multi-language text are supported.
  • Automatic speech recognition: Converts speech (no longer than 60 seconds) into text in real time.
  • Text to speech: Converts text information into audio output online in real time. Rich timbres, and volume and speed options are supported to produce more natural sounds.
  • On-device text to speech: Converts text information into speech with the support of an on-device model, even when there is no Internet connection.
  • Audio file transcription: Converts an audio file (no longer than 5 hours) into text. The generated text contains punctuation and timestamps. Currently, the service supports Chinese and English.
  • Real-time transcription: Converts speech (no longer than 5 hours) into text in real time. The generated text contains punctuation and timestamps.
  • Sound detection: Detects sound events in online (real-time recording) mode. The detected sound events can help you perform subsequent actions.

Supported Devices

Supported devices list
Key notes to be remembered while implementing ML TTS

Development Overview

You need to install the Flutter and Dart plugins in your IDE, and I assume that you have prior knowledge of Flutter and Dart.

Hardware Requirements

  • A computer (desktop or laptop) running Windows 10.
  • Android phone (with the USB cable), which is used for debugging.

Software Requirements

  • Java JDK 1.7 or later.
  • Android Studio or Visual Studio Code installed.
  • HMS Core (APK) 4.X or later.

Integration process

Step 1: Create a Flutter project.

Selecting a new Flutter project and SDK path
Creating the project and adding app details
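
The IDE screenshots are not reproduced here. If you prefer the command line, the equivalent project can be created with the Flutter CLI; the project name below is only an example.

```
flutter create story_app
cd story_app
```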

Step 2: Add the app-level Gradle dependencies.

Open android > app > build.gradle inside the project. The root-level android > build.gradle also needs the Huawei Maven repository and the AGConnect classpath, as sketched below.

Root-level Gradle dependencies
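
Since the original screenshots are not reproduced here, the sketch below shows the additions a typical HMS Flutter project makes to the two Gradle files; the agcp version number is illustrative, so use the one matching your HMS Core SDK.

```
// android/build.gradle (root level): lines to add

buildscript {
    repositories {
        // ...existing repositories...
        maven { url 'https://developer.huawei.com/repo/' }   // Huawei Maven repository
    }
    dependencies {
        // ...existing classpath entries...
        classpath 'com.huawei.agconnect:agcp:1.6.0.300'      // AGConnect Gradle plugin (version illustrative)
    }
}

allprojects {
    repositories {
        // ...existing repositories...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

// android/app/build.gradle (app level): line to add below the other apply plugin lines
apply plugin: 'com.huawei.agconnect'                         // processes agconnect-services.json
```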

Step 3: Add the required permissions in the AndroidManifest.xml file.
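
The original permission list is not shown here; for the on-cloud TTS service the app needs at least network access, for example:

```
<!-- AndroidManifest.xml -->
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
```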

Step 4: Download the required Flutter plugins (for example, the Account Kit and ML Kit plugins).

Step 5: Unzip the downloaded plugins into the parent directory of the project, declare the plugin paths in the pubspec.yaml file under dependencies, and add the path location for the asset images.

Adding images in pubspec.yaml
Adding the plugin path in pubspec.yaml
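
A minimal pubspec.yaml fragment for this setup might look like the following; the plugin folder names and asset paths are placeholders to adapt to your project.

```
dependencies:
  flutter:
    sdk: flutter
  huawei_account:
    path: ../huawei_account/   # unzipped Account Kit plugin folder (placeholder path)
  huawei_ml:
    path: ../huawei_ml/        # unzipped ML Kit plugin folder (placeholder path)

flutter:
  assets:
    - assets/images/           # folder containing the app's images (placeholder path)
```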

Let’s start coding

Login screen code: sign in with Huawei ID.
This screen shows the list of offline articles.
This screen shows the story details, where you can also listen to the story, as sketched below.
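
The code gists from the original post are not embedded here. As a rough illustration of the story-detail screen’s TTS call, the sketch below uses the MLTtsEngine/MLTtsConfig naming of the ML Kit SDK; the exact Dart surface of the huawei_ml plugin differs between versions, so treat the class names, fields, and the speak method as assumptions and check the plugin documentation before use.

```
import 'package:flutter/material.dart';
// Assumed import; the actual exports depend on the huawei_ml plugin version.
import 'package:huawei_ml/huawei_ml.dart';

class StoryDetailScreen extends StatelessWidget {
  final String title;
  final String content;

  const StoryDetailScreen({Key? key, required this.title, required this.content})
      : super(key: key);

  // Reads the story aloud with ML Kit TTS.
  // MLTtsEngine, MLTtsConfig and speak() mirror the native SDK naming and are
  // assumptions about the plugin's Dart API. Remember the 500-character limit
  // per request, so long stories should be split into chunks.
  Future<void> _speakStory() async {
    final MLTtsEngine engine = MLTtsEngine();
    final MLTtsConfig config = MLTtsConfig()
      ..text = content
      ..language = 'en-US';
    await engine.speak(config);
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text(title)),
      body: SingleChildScrollView(
        padding: const EdgeInsets.all(16),
        child: Text(content),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _speakStory,
        child: const Icon(Icons.volume_up),
      ),
    );
  }
}
```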

Result

Final result of the StoryApp: you can log in, list all the stories, view the details of a story, and listen to it.

Tricks and Tips

  • Make sure that the downloaded plugins are unzipped in the parent directory of the project.
  • Make sure that the agconnect-services.json file is added.
  • Make sure the dependencies are added in the pubspec.yaml file.
  • Run flutter pub get after adding dependencies.
  • Make sure that the service is enabled in AppGallery Connect (AGC).
  • Make sure images are defined in the pubspec.yaml file.
  • Make sure that the permissions are added in the Manifest file.

Conclusion

In this article, we have learnt how to integrate Huawei ML Kit Text to Speech in the Flutter StoryApp. TTS supports a maximum of 500 characters per request. Once Account Kit is integrated, users can quickly and conveniently sign in to the app with their Huawei IDs after granting initial access permission. Banner and splash ads help you monetize the StoryApp.

Thank you so much for reading; I would also like to thank the story authors for their write-ups. I hope this article helps you understand the integration of Huawei ML Kit, and banner and splash ads, in the Flutter StoryApp.

Reference

ML Kit Text To Speech

StoryAuthors

Account Kit — Training Video

ML Kit — Training Video

Check out in forum
