Acapela Group

Acapela TTS for iOS
Technical Documentation

www.acapela-group.com

Version  Date      Subject                 Author(s)
0.1      27/08/13  Acapela for iOS 1.500   SB
0.2      25/06/14  Acapela for iOS 1.600   SB
0.2      04/10/15  Acapela for iOS 1.700   SB
0.3      03/05/16  Acapela for iOS 1.800   SB

Acapela Group www.acapela-group.com

Acapela TTS for iOS Documentation

Page : 3/38

INTRODUCTION ........................................ 6

1. TTS TECHNOLOGY ................................... 7
  2.1. Text Processor ............................... 7
  2.2. Speech Synthesizers Technology ............... 7

3. BUNDLING YOUR APP ................................ 8
  3.1. License ...................................... 8
  3.2. Voices ....................................... 8
  3.3. Libraries .................................... 9
  3.4. Special Files ................................ 9

4. IMPLEMENTATION ................................... 10
  4.1. AcapelaSpeech Class .......................... 10
  4.2. Reference List ............................... 10
  4.3. Instance Methods ............................. 12
    4.3.1. loadVoice ................................ 12
    4.3.2. startSpeakingString ...................... 12
    4.3.3. startSpeakingString:toURL ................ 13
    4.3.4. startSpeakingStringSync:toURL ............ 13
    4.3.5. queueSpeakingString ...................... 14
    4.3.6. queueSpeakingString:textIndex ............ 14
    4.3.7. stopSpeaking ............................. 15
    4.3.8. stopSpeakingAtBoundary ................... 15
    4.3.9. pauseSpeakingAtBoundary .................. 15
    4.3.10. continueSpeaking ........................ 15
    4.3.11. skipToNextText .......................... 16
    4.3.12. voice ................................... 16
    4.3.13. setVoice ................................ 16
    4.3.14. rate .................................... 16
    4.3.15. setRate ................................. 17
    4.3.16. volume .................................. 17
    4.3.17. setVolume ............................... 18
    4.3.18. selbreak ................................ 18
    4.3.19. setSelbreak ............................. 18
    4.3.20. voiceShaping ............................ 18
    4.3.21. setVoiceShaping ......................... 19
    4.3.22. audioboost .............................. 19
    4.3.23. setAudioboost ........................... 19
    4.3.24. objectForProperty:error ................. 20
    4.3.25. delegate ................................ 20
    4.3.26. setDelegate ............................. 20
    4.3.27. isSpeaking .............................. 20
    4.3.28. isPaused ................................ 21
    4.3.29. audioPeakLevel .......................... 21
    4.3.30. audioAverageLevel ....................... 21
    2.1.1. generateAudioFile ........................ 21

3. CLASS METHODS .................................... 22
    3.1.1. isAnyApplicationSpeaking ................. 22
    3.1.2. availableVoices .......................... 22
    3.1.3. attributesForVoice ....................... 22
    3.1.4. attributesForCurrentVoice ................ 22
    3.1.5. setVoicesDirectoryArray .................. 23
    3.1.6. refreshVoiceList ......................... 23
  3.2. Delegate Methods ............................. 23
    3.2.1. speechSynthesizer:didFinishSpeaking ...... 23
    3.2.2. speechSynthesizer:didFinishSpeaking:textIndex ... 23
    3.2.3. speechSynthesizer:willSpeakWord .......... 24
    3.2.4. speechSynthesizer:willSpeakViseme ........ 24

4. USERDICO METHODS ................................. 25
    4.1.1. getUserDicosTitles ....................... 25
    4.1.2. addUserDico .............................. 25
    4.1.3. removeUserDico ........................... 26
    4.1.4. setUserDicoEntry:word:nature:transcription ... 26
    4.1.5. removeUserDicoEntry:word ................. 26
    4.1.6. getUserDicoPath .......................... 26
    4.1.7. checkUserDicoContent ..................... 27
    4.1.8. listUserDicoContent ...................... 27
    4.1.9. getPhonemsList ........................... 27
    4.1.10. setUserDicoPath ......................... 27
    4.1.11. isPhoneticEntryValid .................... 28
    4.1.12. convertDicoToUTF8 ....................... 28
  2.1. Constants .................................... 29

3. ERROR CODES ...................................... 31
4. SYNTHESIZER INFORMATION .......................... 32
5. USER DICTIONARY .................................. 33
6. TEXT TAGS ........................................ 35
7. AUDIO MANAGEMENT ................................. 36


Introduction

This document gives an overview of the Acapela for iPhone SDK. For more information about Acapela TTS for iPhone, see the FAQ at http://www.acapela-for-iphone.com/faq


1.TTS Technology

Generally speaking, a Text-to-Speech algorithm is composed of two main software modules:
▪ The text processor (Natural Language Processor, NLP) converts the text into a phonetic representation, together with the information required to generate the appropriate intonation;
▪ The speech synthesizer converts this phonetic representation of the text into a speech signal.
These two software modules use databases to perform their assigned operations. All databases are language dependent; the databases used by the speech synthesizer are also voice dependent.

2.1.Text Processor

The text processor analyses the input text and converts it into a sequence of phonemes combined with prosodic information. This software code is language independent for most Indo-European languages; the databases contain all information specific to a language. The Text Processor module is composed of:
▪ Pre-processor: analyses the text, defines each text unit (word) and expands numbers, dates and abbreviations.
▪ Phonetizer and dictionaries: converts each text unit received from the pre-processor into a sequence of phonemes.
▪ Prosody generation: maps the rhythm and the intonation onto the phonetic representation of the text. This block transmits a phonetic string (phonemes, intonation and duration) to the speech synthesizer.
To ensure the correct pronunciation of special words, access to USER dictionaries is implemented. The user dictionary brings great flexibility to the application developer and the final user: it allows the TTS to handle linguistic information for specific words such as proper names, professional terminology, abbreviations and acronyms (see section 4 for more details).

2.2.Speech Synthesizers Technology

During the last decades, the quality of TTS systems has moved from robotic speech to warm, natural-sounding voices. Our wide range of products is based on three technologies:
▪ Diphone concatenation (HD voices): provides high-quality speech synthesis based on techniques such as Acapela Group's patented Multi Band Resynthesis Overlap Add (MBROLA) system. This approach allows spectral smoothing of the concatenation points, producing a much more natural voice than other concatenative systems. Voices using this technology are marked with the letters HD (High Density). With this technology, only one instance of each speech unit (diphone) is stored in the database, and signal processing (MBROLA) is applied to the units to modify their duration and pitch curve.
▪ Unit selection (HQ, HM and LF voices): a library of pre-recorded human speech units generates a clear and natural-sounding voice. This new-generation TTS solution significantly improves the intelligibility and listening comfort of the speech output. An HQ (High Quality), HM (HQ Medium


size) or LF (HQ Low Footprint) suffix is added to the name of the voices using a synthesizer based on unit selection speech synthesis. Here, the approach is to find a unit in the database that closely matches the required duration and pitch curve, and to apply as little signal processing to it as possible. HQ and HM voices are based on the same technology, but the voice database of an HM voice is smaller than that of an HQ voice thanks to a higher compression rate; the audio quality of HQ voices is therefore better than that of HM voices. LF voices are also based on unit selection, but use the same compression rate as HM voices and a smaller database that contains only the most frequently used units. As a result, the total size of these voices is limited to 30 MB, allowing integration in mobile apps.
▪ HMM synthesis (CO voices): Colibri is the name of Acapela's new speech synthesizer. This statistical parametric speech synthesis system is based on hidden Markov models (HMMs). In this synthesizer, context-dependent HMMs are trained on databases of natural speech, and speech waveforms are generated from the HMMs themselves based on the maximum likelihood criterion. This system offers the ability to model different styles without requiring the recording of very large databases. Even if the overall quality does not reach the level of the unit selection voices, Colibri voices are much smaller (less than 1 MB), have better intelligibility and the advantage of being consistent in quality. Due to their parametric nature, Colibri voices can be manipulated with more flexibility than unit selection voices, including pitch and diphone duration modification.

3.Bundling your app

3.1.License

- Import acattsioslicense.h and acattsioslicense.m into your project.
- Include the license header file in your source code:
#import "acattsioslicense.h"
- Use the acattsioslicense members when you load a voice:
AcapelaSpeech *acaTTS;
[acaTTS loadVoice:@"voice" license:license userid:userid password:password mode:mode];
There are two types of Acapela license: the evaluation license (the one you receive by default) and commercial licenses (which you will use to deploy your application). With an evaluation license, an evaluation message is played randomly. A commercial license removes this limitation. Request a commercial license from your Sales Manager after the Purchase Order Agreement.

3.2.Voices

The voice folder(s) must be included in your project and will be included in the bundle of the application. Right-click on your project in Xcode, then "Add" > "Existing Files…", then select the voice folder(s) (e.g. hq-lfUSEnglish-Ryan-22khz).


You need to select "Create Folder References for any added folder" to be sure the voice folder structure is kept.

3.3.Libraries

Only the simulator version of the library libacattsios.a is provided with the SDK. The device and universal (device/simulator) versions will be sent after the commercial agreement has been made with the Sales Manager in charge of your project.

3.4.Special files

In order to use these libraries and get rid of Acapela library compilation errors, you need to do one of the following:
⁃ add force_cpp.h and force_cpp.cpp to your application;
⁃ or link libstdc++ to your project;
⁃ or rename your Objective-C files with the .mm extension.


4.Implementation

4.1.AcapelaSpeech class

This reference guide provides a complete reference of the functionality offered by the Acapela for iPhone libraries through their API. For each function described, the following details are supplied:
1. Synopsis: prototype of the function in the Objective-C language.
2. Description: describes the functionality provided by the function.
3. Parameters: gives details on the parameters to be specified to the function, together with the default value if any.
4. Return value: a return value is often used to control the result provided by the function.
5. Example: if needed, a short example shows how to use the function.

4.2.Reference list

Initialization
- (int)loadVoice:(NSString *)voice license:(NSString *)license userid:(NSInteger)userid password:(NSInteger)password mode:(NSString *)mode;

Speaking
- (BOOL)startSpeakingString:(NSString *)string;
- (BOOL)startSpeakingString:(NSString *)string toURL:(NSURL *)url;
- (BOOL)startSpeakingStringSync:(NSString *)string toURL:(NSURL *)url;
- (BOOL)queueSpeakingString:(NSString *)string;
- (BOOL)queueSpeakingString:(NSString *)string textIndex:(int *)index;
- (BOOL)generateAudioFile:(NSString *)string toURL:(NSURL *)url type:(NSString *)type sync:(BOOL)sync;

Stop & pausing speech
- (void)stopSpeaking;
- (void)stopSpeakingAtBoundary:(AcapelaSpeechBoundary)boundary;
- (void)pauseSpeakingAtBoundary:(AcapelaSpeechBoundary)boundary;
- (void)continueSpeaking;
- (void)skipToNextText;

Settings
- (NSString *)voice;
- (BOOL)setVoice:(NSString *)voice license:(NSString *)license userid:(NSInteger)userid password:(NSInteger)password mode:(NSString *)mode;
- (float)rate;
- (void)setRate:(float)rate;
- (float)volume;


- (void)setVolume:(float)volume;
- (int)selbreak;
- (void)setSelbreak:(int)selbreak;
- (int)voiceShaping;
- (void)setVoiceShaping:(int)voiceShaping;
- (id)objectForProperty:(NSString *)property error:(NSError **)outError;
- (id)delegate;
- (void)setDelegate:(id)anObject;
- (int)audioboost;
- (void)setAudioboost:(int)audioboost;
- (Float32)audioPeakLevel;
- (Float32)audioAverageLevel;

Speaking status
- (BOOL)isSpeaking;
- (BOOL)isPaused;
+ (BOOL)isAnyApplicationSpeaking;

Enumerating voices
+ (NSArray *)availableVoices;
+ (NSDictionary *)attributesForVoice:(NSString *)voice;
- (NSDictionary *)attributesForCurrentVoice;
+ (void)setVoicesDirectoryArray:(NSArray *)anArray;
+ (void)refreshVoiceList;

Delegate methods
- (void)speechSynthesizer:(AcapelaSpeech *)sender didFinishSpeaking:(BOOL)finishedSpeaking;
- (void)speechSynthesizer:(AcapelaSpeech *)sender didFinishSpeaking:(BOOL)finishedSpeaking textIndex:(int)index;
- (void)speechSynthesizer:(AcapelaSpeech *)sender willSpeakWord:(NSRange)characterRange ofString:(NSString *)string;
- (void)speechSynthesizer:(AcapelaSpeech *)sender willSpeakViseme:(short)visemeCode;

Userdico methods
- (NSStringEncoding)convertDicoToUTF8:(NSString *)userDicoTitle apply:(BOOL)apply;
- (void)setUserDicoPath:(NSString *)path;
- (NSArray *)getUserDicosTitles;
- (BOOL)addUserDico:(NSString *)userDicoTitle relativePath:(NSString *)path;
- (BOOL)removeUserDico:(NSString *)userDicoTitle;
- (BOOL)setUserDicoEntry:(NSString *)userDicoTitle word:(NSString *)word nature:(NSString *)nature transcription:(NSString *)transcription;
- (BOOL)removeUserDicoEntry:(NSString *)userDicoTitle word:(NSString *)word;
- (NSString *)getUserDicoPath:(NSString *)userDicoTitle;
- (NSArray *)checkUserDicoContent:(NSString *)userDicoTitle;
- (NSDictionary *)listUserDicoContent:(NSString *)userDicoTitle;


- (NSArray *)getPhonemsList;
- (NSArray *)isPhoneticEntryValid:(NSString *)phoneticEntry;

4.3.Instance methods

4.3.1.loadVoice

- (int)loadVoice:(NSString *)voice license:(NSString *)license userid:(NSInteger)userid password:(NSInteger)password mode:(NSString *)mode ;

Initializes the receiver with a voice and a license (provided in acattsioslicense.h/.m).

Parameters
- voice : identifier(s) of the voice(s) to set (max. 2, to allow switching with the vce text tag). A single voice identifier (from the enumeration function availableVoices):

[acaTTS loadVoice:@"voice1" …

Two voice identifiers, to switch between voices using the vce text tag:

[acaTTS loadVoice:@"voice1,voice2" …

- license/userid/password : taken from the acattsioslicense.m file.
- mode : optional voice-loading mode (can be set to an empty string). The modes available for a voice are listed in the .ini file of the voice (e.g. non_emilie_22k_ns.qvcu.ini). For example:
  prep_full : use the full text preprocessor for better acronym/date/hour detection
  prep_hd : use the HD text preprocessor for better voice reactivity
  prep_unplugged : deactivate the text preprocessor (acronyms/dates will be read as-is)
  no_smiley : deactivate smiley support (e.g. #CRY# will be read literally)
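Putting the loading steps together, a minimal initialization sketch might look as follows. The voice identifier is a placeholder (use one returned by availableVoices); MyAcaLicense comes from your acattsioslicense files, while MyUserId and MyPassword are assumed names for the corresponding userid/password members:

```
#import "acattsioslicense.h"

// Allocate the synthesizer and load a single voice with the default (empty) mode.
AcapelaSpeech *acaTTS = [[AcapelaSpeech alloc] init];
int result = [acaTTS loadVoice:@"enu_ryan_22k_ns.bvcu"   // placeholder identifier
                       license:MyAcaLicense
                        userid:MyUserId                   // assumed member name
                      password:MyPassword                 // assumed member name
                          mode:@""];
// Check result against the Error Codes chapter before speaking.
NSLog(@"loadVoice returned %d", result);
```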

4.3.2.startSpeakingString

- (BOOL)startSpeakingString:(NSString *)string;

Begins speaking string through the system's default sound output device. If the receiver is currently speaking synthesized speech when startSpeakingString: is called, that process is stopped before string is spoken. This function is asynchronous: when synthesis of string finishes normally or is stopped, the message speechSynthesizer:didFinishSpeaking: is sent to the delegate.


UI design advice: this function is asynchronous; however, it may take a small amount of processing time to create the synthesis thread. Your application should avoid a second call to startSpeakingString: until the function returns, in order to receive all the didFinishSpeaking events in the delegate in the correct order.

Parameters
string
Text to synthesize. When nil or empty, no synthesis occurs.

Returns
YES when synthesis starts successfully, NO otherwise.

4.3.3.startSpeakingString:toURL

- (BOOL)startSpeakingString:(NSString *)string toURL:(NSURL *)url;

Begins synthesizing string into a sound (AIFF) file. This function is asynchronous: when synthesis of string finishes normally or is stopped, the message speechSynthesizer:didFinishSpeaking: is sent to the delegate.

Parameters
string
Text to synthesize. When nil or empty, no synthesis occurs.
url
Filesystem location of the output sound file (must be writeable).

Returns
YES when synthesis starts successfully, NO otherwise.

4.3.4.startSpeakingStringSync:toURL

- (BOOL)startSpeakingStringSync:(NSString *)string toURL:(NSURL *)url;

Begins synthesizing string into a sound (AIFF) file in synchronous mode: the function returns only when the file has been generated. When synthesis of string finishes normally or is stopped, the message speechSynthesizer:didFinishSpeaking: is sent to the delegate.

Parameters
string
Text to synthesize. When nil or empty, no synthesis occurs.
url
Filesystem location of the output sound file.

Returns
YES when synthesis starts successfully, NO otherwise.

4.3.5.queueSpeakingString

- (BOOL)queueSpeakingString:(NSString *)string;

Begins speaking the synthesized string through the system's default sound output device, after the synthesizer has finished speaking the currently queued texts. When synthesis of string finishes normally or is stopped, the messages speechSynthesizer:didFinishSpeaking: and speechSynthesizer:didFinishSpeaking:textIndex: are sent to the delegate.

Parameters
string
Text to synthesize. When nil or empty, no synthesis occurs.

Returns
YES when synthesis starts successfully, NO otherwise.

4.3.6.queueSpeakingString:textIndex

- (BOOL)queueSpeakingString:(NSString *)string textIndex:(int *)index;

Begins speaking the synthesized string through the system's default sound output device, after the synthesizer has finished speaking the currently queued texts. When synthesis of string finishes normally, is skipped or is stopped, the messages speechSynthesizer:didFinishSpeaking: and speechSynthesizer:didFinishSpeaking:textIndex: are sent to the delegate.

Parameters
string
Text to synthesize. When nil or empty, no synthesis occurs.
index
Index of the text in the list of texts sent to the TTS.

Returns
YES when synthesis starts successfully, NO otherwise.
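As a sketch of how the queueing variant above might be used (assuming acaTTS is an AcapelaSpeech instance with a voice already loaded):

```
// Queue two texts; the TTS fills in each text's index so that
// didFinishSpeaking:textIndex: delegate callbacks can be matched
// to the string they belong to.
int firstIndex = 0;
int secondIndex = 0;
[acaTTS queueSpeakingString:@"First sentence." textIndex:&firstIndex];
[acaTTS queueSpeakingString:@"Second sentence." textIndex:&secondIndex];
```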


4.3.7.stopSpeaking

- (void)stopSpeaking;

If the receiver is currently generating speech, synthesis is halted, and the message speechSynthesizer:didFinishSpeaking: is sent to the delegate.

4.3.8.stopSpeakingAtBoundary

- (void)stopSpeakingAtBoundary:(AcapelaSpeechBoundary)boundary;

If the receiver is currently generating speech, synthesis is halted at the given boundary, and the message speechSynthesizer:didFinishSpeaking: is sent to the delegate.

Parameters
boundary : boundary at which to stop speech. See "AcapelaSpeechBoundary".

4.3.9.pauseSpeakingAtBoundary

- (void)pauseSpeakingAtBoundary:(AcapelaSpeechBoundary)boundary;

If the receiver is currently generating speech, synthesis is paused. See continueSpeaking to resume speech. Not applicable to the startSpeakingString:toURL: function.

Parameters Boundary : boundary at which to pause speech. See "AcapelaSpeechBoundary".

4.3.10.continueSpeaking

- (void)continueSpeaking;


If the receiver is currently paused, it resumes the speech. Not applicable to the startSpeakingString:toURL: function.
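A pause/resume sketch built on pauseSpeakingAtBoundary: and continueSpeaking (assuming acaTTS is currently speaking; the boundary constant shown is an assumed name — use one of the actual AcapelaSpeechBoundary values listed in the «Constants» chapter):

```
// Pause at the next boundary, e.g. in response to a "Pause" button.
[acaTTS pauseSpeakingAtBoundary:AcapelaSpeechWordBoundary];  // assumed constant name

// ...later, resume from where the speech was paused.
[acaTTS continueSpeaking];
```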

4.3.11.skipToNextText

- (void)skipToNextText;

If several texts are queued, it stops the text currently playing and skips to the next one. If there are no more queued texts, it stops.

4.3.12.voice

- (NSString *)voice;

Returns
The name of the receiver's current voice.

4.3.13.setVoice

- (BOOL)setVoice:(NSString *)voice license:(NSString *)license userid:(NSInteger)userid password:(NSInteger)password mode:(NSString *)mode;

Sets the receiver's current voice.

Parameters
voice
Identifier of the voice to set as the current voice.

Returns
YES when the voice is set successfully, NO otherwise.

4.3.14.rate

- (float)rate;

Provides the receiver's speaking rate.


Returns
Speaking rate (words per minute).

4.3.15.setRate

- (void)setRate:(float)rate;

Specifies the receiver's speaking rate.

Parameters
rate : words to speak per minute (50 to 700).

4.3.16.volume

- (float)volume;

Provides the receiver's speaking volume.

Returns
Speaking volume.


4.3.17.setVolume

- (void)setVolume:(float)volume;

Specifies the receiver's speaking volume.

Parameters
volume
Sound level to use for speech, from 15 to 200 (maximum, to avoid sound saturation); 0 to mute the TTS.

4.3.18.selbreak

- (int)selbreak;

Provides the receiver's selbreak value. By default the synthesizer first calculates the speech up to the first pause found, a pause being generated by a punctuation sign. If you are synthesizing long sentences without punctuation marks, this can lead to delays before the speech starts. This parameter allows you to force the calculation on smaller text chunks: if greater than 0, it defines the size of the smallest breaking point allowed (in phonemes). The value should be between 1 and 9, or 0 for the normal behaviour.

Returns
Selbreak value: 0 (normal behaviour), or 1 to 9 (custom values).

4.3.19.setSelbreak

- (void)setSelbreak:(int)selbreak;

Specifies the receiver's selbreak value. See the selbreak function.

Parameters
selbreak : 0 (normal behaviour), or 1 to 9 (custom values).

4.3.20.voiceShaping

- (int)voiceShaping;


Provides the voice's shaping value.

Returns
Shaping value, from 70 to 140.

4.3.21.setVoiceShaping

- (void)setVoiceShaping:(int)voiceShaping;

Specifies the voice shaping value. See the voiceShaping function.

Parameters
voiceShaping : shaping value from 70 to 140.

4.3.22.audioboost

- (int)audioboost;

Provides the voice's current audio boost value (0 to 90). 0 (no emphasis) is the neutral and default value. The audio boost affects two aspects of the speech: it improves speech clarity by emphasizing the medium and high frequencies, which are important for intelligibility, and it increases the perceived level of the speech with no saturation effect.

Returns
Audioboost value, from 0 to 90.

4.3.23.setAudioboost

- (void)setAudioboost:(int)audioboost;

Specifies the voice audioboost value. See the audioboost function.


Parameters
audioboost : value from 0 to 90.

4.3.24.objectForProperty:error

- (id)objectForProperty:(NSString *)property error:(NSError **)outError;

Provides the value of a receiver's property.

Parameters
property
Property to get.
outError
On output, the error that occurred while obtaining the value of property.

Returns
The value of property.

4.3.25.delegate

- (id)delegate;

Returns the receiver's delegate.

Returns
The receiver's delegate.

4.3.26.setDelegate

- (void)setDelegate:(id)object;

Specifies the receiver's delegate (typically self). See also the delegate function.

Parameters
object
Object to be the receiver's delegate.

4.3.27.isSpeaking

- (BOOL)isSpeaking;

Indicates whether the receiver is currently generating synthesized speech.

Returns
YES when the receiver is generating synthesized speech, NO otherwise.

4.3.28.isPaused

- (BOOL)isPaused;

Indicates whether the receiver is currently paused during synthesized speech.

Returns
YES when the receiver is paused, NO otherwise.

4.3.29.audioPeakLevel

- (Float32)audioPeakLevel;

Returns the audio peak level from the TTS audio queue.

4.3.30.audioAverageLevel

- (Float32)audioAverageLevel;

Returns the audio average level from the TTS audio queue.

2.1.1.generateAudioFile

- (BOOL)generateAudioFile:(NSString *)string toURL:(NSURL *)url type:(NSString *)type sync:(BOOL)sync;

Generates an audio file from string, either synchronously or asynchronously.

Parameters
string : text to synthesize. When nil or empty, no synthesis occurs.
url : filesystem location of the output sound file.
type : "aiff" or "pcm".
sync : true to return only when generation is done, false to return immediately.

Returns
YES when synthesis starts successfully, NO otherwise.
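For illustration, a sketch that writes a PCM file into the app's Documents directory in synchronous mode (the file name is arbitrary and acaTTS is assumed to be a loaded AcapelaSpeech instance):

```
// Build a writable URL in the app's Documents directory.
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                      NSUserDomainMask, YES) objectAtIndex:0];
NSURL *outURL = [NSURL fileURLWithPath:
                    [docs stringByAppendingPathComponent:@"hello.pcm"]];

// sync:YES -> the call returns only once the file has been written.
BOOL started = [acaTTS generateAudioFile:@"Hello world."
                                   toURL:outURL
                                    type:@"pcm"
                                    sync:YES];
```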


3.Class methods

3.1.1.isAnyApplicationSpeaking

+ (BOOL)isAnyApplicationSpeaking;

Indicates whether any application is currently speaking through the sound output device.

Returns
YES when an application is generating synthesized speech, NO otherwise.

3.1.2.availableVoices

+ (NSArray *)availableVoices;

Provides the identifiers of the voices available in the application.

Returns
Array of strings representing the identifiers of each voice available on the system.

3.1.3.attributesForVoice

+ (NSDictionary *)attributesForVoice:(NSString *)voice;

Provides the attribute dictionary of a voice. The keys and values of voice attribute dictionaries are described in the «Constants» chapter.

Parameters
voice
Identifier of the voice whose attributes you want to obtain.

Returns
Attribute dictionary of the voice identified by voice.

3.1.4.attributesForCurrentVoice

- (NSDictionary *)attributesForCurrentVoice;

Provides the attribute dictionary of the currently loaded voice. The keys and values of voice attribute dictionaries are described in the «Constants» chapter.

Returns
Attribute dictionary of the current voice.

3.1.5.setVoicesDirectoryArray

+ (void)setVoicesDirectoryArray:(NSArray *)directories;

Sets the directories where the voice data files are located. If not called, it is by default the path of your application.

Parameters
directories : NSArray containing NSStrings representing absolute paths of directories. If nil, the path of your application is used.

Example
[AcapelaSpeech setVoicesDirectoryArray:[NSArray arrayWithObjects:
    [[[NSBundle mainBundle] bundlePath] stringByAppendingPathComponent:@"Voices"], nil]];

3.1.6.refreshVoiceList

+ (void)refreshVoiceList;

Relaunches the detection of the voices inside the array of directories set by setVoicesDirectoryArray:. It is useful when you manipulate voice data files outside of your application bundle, for example in /Documents/. The function is implicitly called inside availableVoices just after a setVoicesDirectoryArray: call.

3.2.Delegate methods

3.2.1.speechSynthesizer:didFinishSpeaking

- (void)speechSynthesizer:(AcapelaSpeech *)sender didFinishSpeaking:(BOOL)finishedSpeaking;

Sent to the delegate when an AcapelaSpeech object finishes speaking through the sound output device or to a URL.

Parameters
sender
An AcapelaSpeech object that has stopped speaking into the sound output device.
finishedSpeaking
YES when speaking completed normally, NO if speaking was stopped prematurely for any reason (for example after a stopSpeaking call).

3.2.2.speechSynthesizer:didFinishSpeaking:textIndex

- (void)speechSynthesizer:(AcapelaSpeech *)sender didFinishSpeaking:(BOOL)finishedSpeaking textIndex:(int)index;

Sent to the delegate when an AcapelaSpeech object finishes speaking through the sound output device or to a URL.

Parameters
sender
An AcapelaSpeech object that has stopped speaking into the sound output device.
finishedSpeaking
YES when all the TTS has been played and there is no TTS pending, NO if there is still TTS to be played (when calling skipToNextText, for example).
index
Index of the text that has just finished.

3.2.3.speechSynthesizer:willSpeakWord

- (void)speechSynthesizer:(AcapelaSpeech *)sender willSpeakWord:(NSRange)characterRange ofString:(NSString *)string;

Sent to the delegate just before a synthesized word is spoken through the sound output device. One typical use of this method is to visually highlight the word being spoken.

Parameters
sender
An AcapelaSpeech object that is synthesizing text into speech.
characterRange
Position and length of the word that sender is about to speak into the sound output device.
string
Text that is being synthesized by sender.
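A delegate sketch for the highlighting use case mentioned above (self.textView is an assumed UITextView outlet displaying the same string that was sent to the TTS):

```
- (void)speechSynthesizer:(AcapelaSpeech *)sender
            willSpeakWord:(NSRange)characterRange
                 ofString:(NSString *)string
{
    // Select (and thereby highlight) the word about to be spoken.
    // An attributed-string background color would work equally well.
    [self.textView setSelectedRange:characterRange];
}
```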

3.2.4.speechSynthesizer:willSpeakViseme

- (void)speechSynthesizer:(AcapelaSpeech *)sender willSpeakViseme:(short)visemeCode;

Sent to the delegate just before a synthesized viseme is spoken through the sound output device. One typical use of this method is to animate a virtual mouth.

Parameters
sender
An AcapelaSpeech object that is synthesizing text into speech.
visemeCode
Viseme code (based on the Disney viseme list):

SVP_0 = 0   'silence
SVP_1 = 1   'ae ax ah
SVP_2 = 2   'aa
SVP_3 = 3   'ao
SVP_4 = 4   'ey eh uh
SVP_5 = 5   'er
SVP_6 = 6   'y iy ih ix
SVP_7 = 7   'w uw
SVP_8 = 8   'ow
SVP_9 = 9   'aw
SVP_10 = 10 'oy
SVP_11 = 11 'ay
SVP_12 = 12 'h
SVP_13 = 13 'r
SVP_14 = 14 'l
SVP_15 = 15 's z
SVP_16 = 16 'sh ch jh zh
SVP_17 = 17 'th dh
SVP_18 = 18 'f v
SVP_19 = 19 'd t n
SVP_20 = 20 'k g ng
SVP_21 = 21 'p b m

4.Userdico methods

User dictionaries must have .userdico as extension (see userDictionaryDocumentation.pdf). The following functions will only work if the dictionary is stored in a writable directory (the application's documents or library folder) and not bundled with the application: addUserDico/removeUserDico. You must reload the voice to apply the changes.

4.1.1.getUserDicosTitles

- (NSArray *)getUserDicosTitles;

Returns an array with the list of user dictionary titles for the current voice. The title string is used as a parameter for the user dictionary functions.

Returns
NSArray with the list of user dictionary titles.

4.1.2.addUserDico

- (BOOL)addUserDico:(NSString *)userDicoTitle relativePath:(NSString *)path;


Adds a new user dictionary for the currently loaded voice. It creates the user dictionary file and adds the corresponding line in the voice .ini file.

Returns
true on success, false for any error.

4.1.3.removeUserDico

- (BOOL)removeUserDico:(NSString *)userDicoTitle;

Removes the user dictionary specified by the title (for the currently loaded voice).

Returns
true on success, false for any error.

4.1.4.setUserDicoEntry:word:nature:transcription

- (BOOL)setUserDicoEntry:(NSString *)userDicoTitle word:(NSString *)word nature:(NSString *)nature transcription:(NSString *)transcription;

Adds a new entry in the user dictionary (see userDictionaryDocumentation for the rules).

Returns
true on success, false if the nature or transcription contains any error.
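A sketch of adding one entry (the dictionary title, nature and transcription values below are illustrative only — valid natures and phonemes are described in userDictionaryDocumentation and returned by getPhonemsList):

```
// Title from getUserDicosTitles; nature and transcription are placeholders.
BOOL ok = [acaTTS setUserDicoEntry:@"mydico"
                              word:@"TTS"
                            nature:@"noun"             // illustrative nature
                     transcription:@"t e k s t"];      // illustrative transcription
// Reload the voice afterwards so the new entry is taken into account
// (see the note at the top of this chapter).
```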

4.1.5.removeUserDicoEntry:word

- (BOOL)removeUserDicoEntry:(NSString *)userDicoTitle word:(NSString *)word;

Removes the entry specified by the word in the user dictionary.

Returns
true on success, false for any error.

4.1.6.getUserDicoPath

- (NSString *)getUserDicoPath:(NSString *)userDicoTitle;


Returns the full user dictionary file path.

Returns
nil if not found.

4.1.7.checkUserDicoContent

- (NSArray *)checkUserDicoContent:(NSString *)userDicoTitle;

Returns an array with the indexes of lines containing at least one error in the transcription.

Returns
nil if no error is found.

4.1.8.listUserDicoContent

- (NSDictionary *)listUserDicoContent:(NSString *)userDicoTitle;

Returns a dictionary in which each key represents a line index and each value is an NSArray with the word, nature and transcription of that line.

Returns
nil on error.

4.1.9.getPhonemsList

- (NSArray *)getPhonemsList;

Returns an array with the phoneme code list for the current voice, i.e. the phonemes you can use for the transcription (see userDictionaryDocumentation for the rules).

Returns
nil on error.

4.1.10.setUserDicoPath

- (NSString *)setUserDicoPath:(NSString *)path;


Specify the location of your userdico (see chapter 7 for detailed description) Note that you need to do that before loading the voice so alloc the TTS object, set userdico path then load the voice.

acaTTS = [AcapelaSpeech alloc];
[acaTTS setUserDicoPath:userDicoDirectory];
[acaTTS initWithVoice:@"frf_antoinesad_22k_ns.bvcu" license:MyAcaLicense];

Returns nil if not found.

4.1.11. isPhoneticEntryValid

- (NSArray *)isPhoneticEntryValid:(NSString *)phoneticEntry;

Checks whether a phonetic string contains any invalid phoneme. Returns an array of the incorrect phonemes.

4.1.12. convertDicoToUTF8

- (NSStringEncoding)convertDicoToUTF8:(NSString *)userDicoTitle apply:(BOOL)apply;

Converts the user dictionary to UTF-8 encoding. If apply is set to false, no conversion is performed and the current dictionary encoding is returned. Returns the dictionary encoding, or -1 on error.
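A possible usage pattern (a sketch, assuming acaTTS is a loaded AcapelaSpeech instance and "default.userdico" is a hypothetical dictionary title): first query the current encoding with apply:NO, then convert only if needed:

// Query the encoding without converting
NSStringEncoding encoding = [acaTTS convertDicoToUTF8:@"default.userdico" apply:NO];
if (encoding != (NSStringEncoding)-1 && encoding != NSUTF8StringEncoding) {
    // Dictionary is in another encoding; convert it in place
    [acaTTS convertDicoToUTF8:@"default.userdico" apply:YES];
}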


2.1. Constants

Voice attributes

NSString *voiceIdentifier = [voiceAttributesDic valueForKey:AcapelaVoiceIdentifier];
NSString *voiceName = [voiceAttributesDic valueForKey:AcapelaVoiceName];
NSString *voiceAge = [voiceAttributesDic valueForKey:AcapelaVoiceAge];
NSString *voiceGender = [voiceAttributesDic valueForKey:AcapelaVoiceGender];
NSString *demoText = [voiceAttributesDic valueForKey:AcapelaVoiceDemoText];
NSString *voiceLanguage = [voiceAttributesDic valueForKey:AcapelaVoiceLanguage];
NSString *voiceLocaleIdentifier = [voiceAttributesDic valueForKey:AcapelaVoiceLocaleIdentifier];
NSString *voiceStringEncoding = [voiceAttributesDic valueForKey:AcapelaVoiceStringEncoding];
NSString *voiceDataVersion = [voiceAttributesDic valueForKey:AcapelaVoiceDataVersion];
NSString *voiceGlobalPath = [voiceAttributesDic valueForKey:AcapelaVoiceGlobalPath];
NSString *voiceRelativePath = [voiceAttributesDic valueForKey:AcapelaVoiceRelativePathToApp];
NSString *voiceFrequency = [voiceAttributesDic valueForKey:AcapelaVoiceFrequency];
NSString *voiceQuality = [voiceAttributesDic valueForKey:AcapelaVoiceQuality];

- AcapelaVoiceIdentifier: A unique string identifying the voice.
- AcapelaVoiceName: The name of the voice, suitable for display.
- AcapelaVoiceAge: The perceived age (in years) of the voice.
- AcapelaVoiceGender: The perceived gender of the voice. May be AcapelaVoiceGenderMale, AcapelaVoiceGenderFemale, or AcapelaVoiceGenderNeuter.
- AcapelaVoiceDemoText: A demonstration text to speak.
- AcapelaVoiceLanguage: The voice language.
- AcapelaVoiceLocaleIdentifier: The locale identifier of the voice. See "Locales Programming Guide" in the ADC documentation.
- AcapelaVoiceStringEncoding: The NSString encoding used by the voice. See "NSString Class Reference" in the ADC documentation for the constants.
- AcapelaVoiceDataVersion: The voice data version.
- AcapelaVoiceGlobalPath


The global path in the iPhone file system of the .ini voice file.
- AcapelaVoiceRelativePathToApp: The relative path in the app of the .ini voice file.
- AcapelaVoiceFrequency: The voice frequency.
- AcapelaVoiceQuality: The voice quality.

Voice Genders

NSString *AcapelaVoiceGenderNeuter;
NSString *AcapelaVoiceGenderMale;
NSString *AcapelaVoiceGenderFemale;

- AcapelaVoiceGenderNeuter: A neutral voice (neither male nor female).
- AcapelaVoiceGenderMale: A male voice.
- AcapelaVoiceGenderFemale: A female voice.

Property keys

These constants identify synthesizer properties. They are used with the objectForProperty:error: function. For the moment, only a small selection of these properties is supported by Acapela for iPhone. The setObject:forProperty:error: function is not yet implemented.

NSString *AcapelaSpeechErrorsProperty; // NSDictionary, see keys below
NSString *AcapelaSpeechSynthesizerInfoProperty; // NSDictionary, see keys below

- AcapelaSpeechErrorsProperty: Retrieves the errors that occurred during speech synthesis. The value is an NSDictionary; see the keys below under "Synthesis errors".
- AcapelaSpeechSynthesizerInfoProperty: Retrieves speech synthesizer information. The value is an NSDictionary; see the keys below under "Synthesizer information".
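For example, the synthesizer information dictionary could be read as follows (a sketch, assuming acaTTS is a loaded AcapelaSpeech instance; the keys are described under "Synthesizer information" below):

NSError *error = nil;
NSDictionary *info = [acaTTS objectForProperty:AcapelaSpeechSynthesizerInfoProperty error:&error];
NSString *identifier = [info valueForKey:AcapelaSpeechSynthesizerInfoIdentifier];
NSString *version = [info valueForKey:AcapelaSpeechSynthesizerInfoVersion];
NSLog(@"Synthesizer %@ version %@", identifier, version);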

Synthesis errors

These constants identify errors that may occur during speech synthesis (used with AcapelaSpeechErrorsProperty).

NSString *AcapelaSpeechErrorCount
NSString *AcapelaSpeechErrorOldestCode
NSString *AcapelaSpeechErrorOldestCharacterOffset
NSString *AcapelaSpeechErrorNewestCode


NSString *AcapelaSpeechErrorNewestCharacterOffset
NSString *AcapelaSpeechErrorOldestNLPcode
NSString *AcapelaSpeechErrorOldestSYNTHcode
NSString *AcapelaSpeechErrorNewestNLPcode
NSString *AcapelaSpeechErrorNewestSYNTHcode

AcapelaSpeechErrorCount
The number of errors that have occurred in processing the current text string since the last call to the objectForProperty:error: function with the AcapelaSpeechErrorsProperty property. Using the AcapelaSpeechErrorOldest keys and the AcapelaSpeechErrorNewest keys, you can get information about the oldest and most recent errors that occurred since the last call to objectForProperty:error:, but you cannot get information about any intervening errors. The value is an NSNumber.

AcapelaSpeechErrorOldestCode
The error code of the first error that occurred since the last call to the objectForProperty:error: function with the AcapelaSpeechErrorsProperty property. The value is an NSNumber.

AcapelaSpeechErrorOldestCharacterOffset
The position in the text string of the first error that occurred since the last call to the objectForProperty:error: function with the AcapelaSpeechErrorsProperty property. The value is an NSNumber.

AcapelaSpeechErrorNewestCode
The error code of the most recent error that occurred since the last call to the objectForProperty:error: function with the AcapelaSpeechErrorsProperty property. The value is an NSNumber.

AcapelaSpeechErrorNewestCharacterOffset
The position in the text string of the most recent error that occurred since the last call to the objectForProperty:error: function with the AcapelaSpeechErrorsProperty property. The value is an NSNumber.

AcapelaSpeechErrorOldestNLPcode
The error code of the first error generated by the NLP module since the last call to the objectForProperty:error: function with the AcapelaSpeechErrorsProperty property. The value is an NSNumber.

AcapelaSpeechErrorOldestSYNTHcode
The error code of the first error generated by the synthesizer since the last call to the objectForProperty:error: function with the AcapelaSpeechErrorsProperty property. The value is an NSNumber.

AcapelaSpeechErrorNewestNLPcode
The error code of the last error generated by the NLP module since the last call to the objectForProperty:error: function with the AcapelaSpeechErrorsProperty property. The value is an NSNumber.

AcapelaSpeechErrorNewestSYNTHcode
The error code of the last error generated by the synthesizer since the last call to the objectForProperty:error: function with the AcapelaSpeechErrorsProperty property. The value is an NSNumber.

3. Error Codes

E_BABTTS_SELECTOR_BADVERSION = -27, //!< Incompatible selector data version
E_BABTTS_DICT_BADVERSION = -26, //!< The dictionary version is too old; use the conversion tool


E_BABTTS_LIBNOTINITIALIZED = -25,
E_BABTTS_NOTVALIDLICENSE = -24, //!< The license key is not valid for the requested function
E_BABTTS_NODICT = -23, //!< No dictionary
E_BABTTS_NODBA = -22, //!< A data file is missing
E_BABTTS_NOTIMPLEMENTED = -21, //!< The function is not yet implemented
E_BABTTS_DICT_NOENTRY = -20, //!< The user lexicon is empty
E_BABTTS_DICT_READ = -19, //!< Error reading the lexicon file
E_BABTTS_DICT_WRITE = -18, //!< Error when attempting to write to the file
E_BABTTS_DICT_OPEN = -17, //!< The specified dictionary doesn't exist
E_BABTTS_BADPHO = -16, //!< An incorrect phoneme was introduced
E_BABTTS_FILEOPEN = -15, //!< Error when opening a file
E_BABTTS_FILEWRITE = -14, //!< Error when attempting to write to a file
E_BABTTS_INVALIDTAG = -13, //!< The inserted tag is invalid (obsolete)
E_BABTTS_NONLP = -12, //!< The NLP object is invalid/doesn't exist
E_BABTTS_THREADERROR = -11, //!< Error when attempting to start a new thread
E_BABTTS_NOTVALIDPARAMETER = -10, //!< A parameter/argument is not valid
E_BABTTS_NOREGISTRY = -9, //!< The required registry keys are not valid / do not exist
E_BABTTS_REGISTRYERROR = -8, //!< Bad information in the registry
E_BABTTS_PROCESSERROR = -7, //!< "Generic"/unhandled error
E_BABTTS_WAVEOUTNOTFREE = -6, //!< Can't open the output device
E_BABTTS_WAVEOUTWRITE = -5, //!< Can't write to the output device
E_BABTTS_SPEAKERROR = -4, //!< Error while speaking or processing text
E_BABTTS_ISPLAYING = -3, //!< Already in play mode or currently processing text
E_BABTTS_MEMFREE = -2, //!< Problem when freeing memory
E_BABTTS_NOMEM = -1, //!< No memory for allocation
E_BABTTS_NOERROR = 0, //!< No error
W_BABTTS_NOTPROCESSED = 1, //!< The processing was not done
W_BABTTS_NOTFULLYPROCESSED = 2, //!< The processing was not fully done; call the function one more time to complete it
W_BABTTS_NOMOREDATA = 3

4. Synthesizer information

These constants identify synthesizer properties (used with AcapelaSpeechSynthesizerInfoProperty).

NSString *AcapelaSpeechSynthesizerInfoIdentifier
NSString *AcapelaSpeechSynthesizerInfoVersion

AcapelaSpeechSynthesizerInfoIdentifier
The identifier of the speech synthesizer. The value is an NSString.

AcapelaSpeechSynthesizerInfoVersion
The version of the speech synthesizer. The value is an NSString.


Speech command delimiters

These constants identify speech-command delimiters in synthesized text (used with AcapelaSpeechCommandDelimiterProperty).

NSString *AcapelaSpeechCommandPrefix
NSString *AcapelaSpeechCommandSuffix

AcapelaSpeechCommandPrefix
The command delimiter string that prefixes a command.

AcapelaSpeechCommandSuffix
The command delimiter string that suffixes a command.

The AcapelaSpeechBoundary values below are locations that indicate where speech should be paused or stopped. See pauseSpeakingAtBoundary: and stopSpeakingAtBoundary: for more information.

AcapelaSpeechBoundary

enum {
    AcapelaSpeechImmediateBoundary = 0,
    AcapelaSpeechWordBoundary,
    AcapelaSpeechSentenceBoundary
};
typedef NSUInteger AcapelaSpeechBoundary;

AcapelaSpeechImmediateBoundary
Speech should be paused or stopped immediately.

AcapelaSpeechWordBoundary
Speech should be paused or stopped at the end of the word.

AcapelaSpeechSentenceBoundary
Speech should be paused or stopped at the end of the sentence.
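As a sketch (assuming acaTTS is a loaded AcapelaSpeech instance that is currently speaking), pausing at the end of the current word and resuming later could look like:

// Pause cleanly at the end of the current word
[acaTTS pauseSpeakingAtBoundary:AcapelaSpeechWordBoundary];
// ... later, resume from where the speech was paused
[acaTTS continueSpeaking];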

5. User dictionary

Runtime creation

If you bundled a voice in your application, all its files will be in a read-only location. Here is a way to allow the user dictionary manipulation functions:

- Create or set your user dictionary in a writable folder (Documents or Library, for example)

- In the .ini file(s) of the voice(s) bundled with your application, set a user dictionary with only a name or a relative path; it will be used in addition to the path you will set with setUserDicoPath
- In your application, call setUserDicoPath with the writable folder path

By doing that, the TTS will look for your user dictionary (the one set in the .ini file) in the path you set with setUserDicoPath.

For example, if you set this in the .ini file of the voice:

USERDICO LDI "ryan/default.userdico"

Then call setUserDicoPath with the Documents folder (for example). The user dictionary is then expected to be at documentsFolder/ryan/default.userdico.

- On startup of your application, create the user dictionary (mandatory, otherwise the voice won't load):

acaTTS = [AcapelaSpeech alloc];

// Create the default UserDico for the voice delivered in the bundle
// We include its name in the .ini file, for example "ryan/default.userdico"
NSError *error;
// Get the application Documents folder
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
// Create the ryan folder if it doesn't exist already
NSString *dirDicoPath = [documentsDirectory stringByAppendingString:@"/ryan"];
[[NSFileManager defaultManager] createDirectoryAtPath:dirDicoPath withIntermediateDirectories:YES attributes:nil error:&error];
NSString *fullDicoPath = [documentsDirectory stringByAppendingString:@"/ryan/default.userdico"];
// Check the file doesn't already exist, to avoid erasing its content
if (![[NSFileManager defaultManager] fileExistsAtPath:fullDicoPath]) {
    // Create the file
    if (![@"UserDico\n" writeToFile:fullDicoPath atomically:YES encoding:NSISOLatin1StringEncoding error:&error]) {
        NSLog(@"%@", error);
        return;
    }
}
// Set the userdico path as being the Documents folder
[acaTTS setUserDicoPath:documentsDirectory];

Snippets

// Get the user dico content
NSDictionary *userDicontentList = [acaTTS listUserDicoContent:userDicoTitle];
NSLog(@"********** userDicontentList **********");
for (int i = 0; i < [userDicontentList count]; i++) {
    NSArray *line = [userDicontentList objectForKey:[NSString stringWithFormat:@"%d", i]];
    NSLog(@"%@\t%@\t%@", [line objectAtIndex:0], [line objectAtIndex:1], [line objectAtIndex:2]);
}

NSArray *userDicoEntries = [acaTTS checkUserDicoContent:userDicoTitle];
NSLog(@"********** checkUserDico **********");
for (int i = 0; i < [userDicoEntries count]; i++) {
    NSLog(@"Userdico error in line : %@", [userDicoEntries objectAtIndex:i]);
    NSArray *entryArray = [userDicontentList objectForKey:[userDicoEntries objectAtIndex:i]];
    NSArray *result = [acaTTS isPhoneticEntryValid:[entryArray objectAtIndex:2]];
    for (int j = 0; j < [result count]; j++) {
        NSLog(@"Error on phoneme : %@", [result objectAtIndex:j]);
    }
}

6. Text tags

The backslash character is not allowed within a tag. Tags are case-insensitive. When the engine encounters a tag it does not understand, the tag is simply ignored (not read).

Pause
\Pau=number\
Pause in milliseconds. The limit is 5000 milliseconds.

Phonetic entries
\Prn=h @ l @U \
Phonetic string in SAMPA (see the language docs for the phonetic tables).
Example: \Prn=h @ l @U \ . (hello)


\Prn=t_h i1 n EI1 dZ r= z \ . (teenagers)
\Prn=@U1 v r= A l \ . (overall)

Relative speed
\RSPD=100\
Relative speed in percent: 30% to 400%.

Voice shaping
\VCT=100\
Voice shaping in percent: 70% to 140%.

Reset tags
\RST\
Resets all voice tags (except the RmS and RmW tags).

Spelling
\RmS=value\
Spelling. 1 = activated; 0 = deactivated.

Word by word
\RmW=value\
Word by word. 1 = activated; 0 = deactivated.
Once activated, you have to set them to 0 to deactivate them.

Voice switch
\vce=speaker=voiceIdentifier\
Specifies the voice with which the text following the tag must be played. The voice identifier is one of the identifiers used with initWithVoice.

[acaTTS initWithVoice:@"voice1,voice2" license:acaLicense];
[acaTTS startSpeakingString:@"\\vce=speaker=voice1\\ Welcome \\vce=speaker=voice2\\ Willkommen"];
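Tags can also be combined in a single string. A sketch (assuming acaTTS is a loaded AcapelaSpeech instance; remember to escape the backslashes in Objective-C string literals):

// Speak at 80% speed, pause for one second, then reset to the default settings
[acaTTS startSpeakingString:@"\\RSPD=80\\ This sentence is slower. \\Pau=1000\\ \\RST\\ Back to normal."];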

7. Audio Management

The TTSDemo illustrates how to implement the following audio cases.

Audio Background

With iOS 4.x, applications can now run in the background. To let the TTS play while your app is in the background, you must include the UIBackgroundModes key in its Info.plist file. Its value is an array that contains one or more strings; use the value audio, meaning the application plays audible content to the user while in the background.

Then use the following code:

// Allow audio to keep playing when the app is in the background
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *setCategoryError = nil;
[audioSession setCategory:AVAudioSessionCategoryPlayback error:&setCategoryError];
NSError *activationError = nil;
[audioSession setActive:YES error:&activationError];

Phone – Alarm Events

// On phone call/alarm, pause the TTS and set the TTS inactive
- (void)beginInterruption {
    [MyAcaTTS pauseSpeakingAtBoundary:AcapelaSpeechImmediateBoundary];
    [MyAcaTTS setActive:NO];
}

// When done, set the TTS active and resume it
- (void)endInterruption {
    [MyAcaTTS setActive:YES];
    [MyAcaTTS continueSpeaking];
}

Interruptions

Please refer to the use of the function void MyInterruptionListener(void *inClientData, UInt32 inInterruptionState) inside the TTSDemo project to see how these functions must be implemented.

First, you need to initialize an AudioSession in your app when it is loading, for example in viewDidLoad:

AudioSessionInitialize(NULL, NULL, MyInterruptionListener, &MyAcaTTS);

Then, implement the MyInterruptionListener function. At the beginning of an interruption, you need to deactivate your AcapelaSpeech instance THEN deactivate your AudioSession. At the end of an interruption, it is the opposite: first reactivate your AudioSession and THEN your AcapelaSpeech instance.

if (inInterruptionState == kAudioSessionBeginInterruption) {
    [anAcapelaSpeech setActive:NO];
    status = AudioSessionSetActive(NO);
}
if (inInterruptionState == kAudioSessionEndInterruption) {
    status = AudioSessionSetActive(YES);
    [anAcapelaSpeech setActive:YES];
}

Music Player

If the music player is playing when you launch a TTS speak, you can decide to let the music keep playing at the same time, or to pause the music player and resume it when the TTS is done.


Use the following code to let the music player continue while the TTS is speaking:

UInt32 sessionCategory = kAudioSessionCategory_AmbientSound;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);

Use the following code to manage the music player (works only on a device). In your .h:

#import <MediaPlayer/MediaPlayer.h>
#define PLAYER_TYPE_PREF_KEY @"player_type_preference"
MPMusicPlayerController *setMusicPlayer;

In your .m, pause the music player before starting the TTS:

// Get the state of the music player so we can start it back up if necessary
MPMusicPlaybackState playbackState = [setMusicPlayer playbackState];
if (playbackState == MPMusicPlaybackStatePlaying) {
    musicstate = YES;
    [setMusicPlayer pause];
} else {
    // Need this to re-enable speech with the music player active
    AudioSessionSetActive(YES);
    [acaTTS setActive:YES];
}

Resume the player when the TTS is done:

// Check the musicstate: if it was playing, start it again
if (musicstate) {
    [setMusicPlayer play];
}
