
Configuration

Bases: BaseSettings

Configuration for the application. Values can be set via environment variables.

Pydantic will automatically map uppercased environment variables to the corresponding fields. To populate nested fields, prefix the environment variable with the nested field name and a double underscore. For example, the environment variable LOG_LEVEL maps to log_level, WHISPER__MODEL (note the double underscore) maps to whisper.model, and WHISPER__COMPUTE_TYPE=int8 sets the quantization to int8.
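
As a sketch, the mapping described above can be exercised from the shell (the values are illustrative, not recommendations):

```shell
# Flat fields map from uppercased variable names; nested fields use a
# double underscore (env_nested_delimiter="__").
export LOG_LEVEL=info                                # -> log_level
export UVICORN_PORT=9000                             # -> port (via field alias)
export WHISPER__MODEL=Systran/faster-whisper-medium  # -> whisper.model
export WHISPER__COMPUTE_TYPE=int8                    # -> whisper.compute_type (int8 quantization)
```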

Source code in src/speaches/config.py
class Config(BaseSettings):
    """Configuration for the application. Values can be set via environment variables.

    Pydantic will automatically map uppercased environment variables to the corresponding fields.
    To populate nested fields, prefix the environment variable with the nested field name and a double underscore. For example,
    the environment variable `LOG_LEVEL` maps to `log_level`, `WHISPER__MODEL` (note the double underscore) maps to `whisper.model`, and `WHISPER__COMPUTE_TYPE=int8` sets the quantization to `int8`.
    """  # noqa: E501

    model_config = SettingsConfigDict(env_nested_delimiter="__")

    api_key: str | None = None
    """
    If set, the API key will be required for all requests.
    """
    log_level: str = "debug"
    """
    Logging level. One of: 'debug', 'info', 'warning', 'error', 'critical'.
    """
    host: str = Field(alias="UVICORN_HOST", default="0.0.0.0")
    port: int = Field(alias="UVICORN_PORT", default=8000)
    allow_origins: list[str] | None = None
    """
    https://docs.pydantic.dev/latest/concepts/pydantic_settings/#parsing-environment-variable-values
    Usage:
        `export ALLOW_ORIGINS='["http://localhost:3000", "http://localhost:3001"]'`
        `export ALLOW_ORIGINS='["*"]'`
    """

    enable_ui: bool = True
    """
    Whether to enable the Gradio UI. You may want to disable this if you want to minimize the dependencies and slightly improve the startup time.
    """  # noqa: E501

    default_language: Language | None = None
    """
    Default language to use for transcription. If not set, the language will be detected automatically.
    Setting it is recommended, as it improves performance.
    """
    default_response_format: ResponseFormat = ResponseFormat.JSON
    whisper: WhisperConfig = WhisperConfig()
    max_no_data_seconds: float = 1.0
    """
    Max duration to wait for the next audio chunk before the transcription is finalized and the connection is closed.
    Used only for live transcription (WS /v1/audio/transcriptions).
    """
    min_duration: float = 1.0
    """
    Minimum duration of an audio chunk that will be transcribed.
    Used only for live transcription (WS /v1/audio/transcriptions).
    """
    word_timestamp_error_margin: float = 0.2
    """
    Used only for live transcription (WS /v1/audio/transcriptions).
    """
    max_inactivity_seconds: float = 2.5
    """
    Max allowed audio duration without any speech being detected before the transcription is finalized and the connection is closed.
    Used only for live transcription (WS /v1/audio/transcriptions).
    """  # noqa: E501
    inactivity_window_seconds: float = 5.0
    """
    Controls how many of the most recent seconds of audio are passed through VAD. Should be greater than `max_inactivity_seconds`.
    Used only for live transcription (WS /v1/audio/transcriptions).
    """  # noqa: E501

    # NOTE: options below are not used yet and should be ignored. Added as a placeholder for future features I'm currently working on.  # noqa: E501

    chat_completion_base_url: str = "https://api.openai.com/v1"
    chat_completion_api_key: str | None = None

    speech_base_url: str | None = None
    speech_api_key: str | None = None
    speech_model: str = "piper"
    speech_extra_body: dict = {"sample_rate": 24000}

    transcription_base_url: str | None = None
    transcription_api_key: str | None = None

api_key

api_key: str | None = None

If set, the API key will be required for all requests.
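
A minimal sketch of enabling key-based auth; the Bearer header scheme shown in the comment is an assumption, not something this reference confirms:

```shell
# Require an API key for all requests (the value is illustrative).
export API_KEY=my-secret-key
# Clients would then authenticate, e.g. (header scheme assumed):
#   curl -H "Authorization: Bearer $API_KEY" http://localhost:8000/v1/models
```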

log_level

log_level: str = 'debug'

Logging level. One of: 'debug', 'info', 'warning', 'error', 'critical'.

host

host: str = Field(alias='UVICORN_HOST', default='0.0.0.0')

port

port: int = Field(alias='UVICORN_PORT', default=8000)

allow_origins

allow_origins: list[str] | None = None

https://docs.pydantic.dev/latest/concepts/pydantic_settings/#parsing-environment-variable-values Usage: export ALLOW_ORIGINS='["http://localhost:3000", "http://localhost:3001"]' export ALLOW_ORIGINS='["*"]'

enable_ui

enable_ui: bool = True

Whether to enable the Gradio UI. You may want to disable this if you want to minimize the dependencies and slightly improve the startup time.

default_language

default_language: Language | None = None

Default language to use for transcription. If not set, the language will be detected automatically. Setting it is recommended, as it improves performance.

default_response_format

default_response_format: ResponseFormat = JSON

whisper

max_no_data_seconds

max_no_data_seconds: float = 1.0

Max duration to wait for the next audio chunk before the transcription is finalized and the connection is closed. Used only for live transcription (WS /v1/audio/transcriptions).

min_duration

min_duration: float = 1.0

Minimum duration of an audio chunk that will be transcribed. Used only for live transcription (WS /v1/audio/transcriptions).

word_timestamp_error_margin

word_timestamp_error_margin: float = 0.2

Used only for live transcription (WS /v1/audio/transcriptions).

max_inactivity_seconds

max_inactivity_seconds: float = 2.5

Max allowed audio duration without any speech being detected before the transcription is finalized and the connection is closed. Used only for live transcription (WS /v1/audio/transcriptions).

inactivity_window_seconds

inactivity_window_seconds: float = 5.0

Controls how many of the most recent seconds of audio are passed through VAD. Should be greater than max_inactivity_seconds. Used only for live transcription (WS /v1/audio/transcriptions).
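
Taken together, the live-transcription knobs above can be tuned via environment variables; a sketch with illustrative values, not recommendations:

```shell
# All of these apply only to WS /v1/audio/transcriptions.
export MAX_NO_DATA_SECONDS=2.0        # close sooner/later when no audio arrives
export MIN_DURATION=0.5               # smallest chunk that gets transcribed
export MAX_INACTIVITY_SECONDS=5.0     # close after this much speech-free audio
export INACTIVITY_WINDOW_SECONDS=10.0 # must stay greater than MAX_INACTIVITY_SECONDS
```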

Bases: BaseModel

See https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/transcribe.py#L599.

Source code in src/speaches/config.py
class WhisperConfig(BaseModel):
    """See https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/transcribe.py#L599."""

    model: str = Field(default="Systran/faster-whisper-small")
    """
    Default HuggingFace model to use for transcription. Note that the model must support being run with CTranslate2.
    This model will be used if no model is specified in the request.

    Models created by authors of `faster-whisper` can be found at https://huggingface.co/Systran
    You can find other supported models at https://huggingface.co/models?p=2&sort=trending&search=ctranslate2 and https://huggingface.co/models?sort=trending&search=ct2
    """
    inference_device: Device = Field(default=Device.AUTO)
    device_index: int | list[int] = 0
    compute_type: Quantization = Field(default=Quantization.DEFAULT)
    cpu_threads: int = 0
    num_workers: int = 1
    ttl: int = Field(default=300, ge=-1)
    """
    Time in seconds until the model is unloaded if it is not being used.
    -1: Never unload the model.
    0: Unload the model immediately after usage.
    """
    use_batched_mode: bool = False
    """
    Whether to use batched mode (introduced in the 1.1.0 `faster-whisper` release) for inference. This will likely become the default in the future and the configuration option will be removed.
    """  # noqa: E501

model: str = Field(default='Systran/faster-whisper-small')

Default HuggingFace model to use for transcription. Note that the model must support being run with CTranslate2. This model will be used if no model is specified in the request.

Models created by authors of faster-whisper can be found at https://huggingface.co/Systran You can find other supported models at https://huggingface.co/models?p=2&sort=trending&search=ctranslate2 and https://huggingface.co/models?sort=trending&search=ct2

ttl: int = Field(default=300, ge=-1)

Time in seconds until the model is unloaded if it is not being used. -1: Never unload the model. 0: Unload the model immediately after usage.

use_batched_mode: bool = False

Whether to use batched mode (introduced in the 1.1.0 faster-whisper release) for inference. This will likely become the default in the future and the configuration option will be removed.
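
Since WhisperConfig is nested under the whisper field, all of its options take the WHISPER__ prefix. A sketch (values are illustrative; the `cpu` device value is assumed from the Device enum, not confirmed by this reference):

```shell
export WHISPER__MODEL=Systran/faster-whisper-medium
export WHISPER__INFERENCE_DEVICE=cpu   # assumed Device enum value
export WHISPER__TTL=-1                 # never unload the model
export WHISPER__USE_BATCHED_MODE=true  # opt into batched inference
```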