# Create a trained model

**PUT /_ml/trained_models/{model_id}**

Enables you to supply a trained model that is not created by data frame analytics.

## Required authorization

* Cluster privileges: `manage_ml`

## Servers

- http://api.example.com

## Authentication methods

- API key auth

## Parameters

### Path parameters

- **model_id** (string) The unique identifier of the trained model.

### Query parameters

- **defer_definition_decompression** (boolean) If set to `true` and a `compressed_definition` is provided, the request defers definition decompression and skips relevant validations.
- **wait_for_completion** (boolean) Whether to wait for all child operations (for example, model download) to complete.

### Body: application/json (object)

- **compressed_definition** (string) The compressed (GZipped and Base64 encoded) inference definition of the model. If `compressed_definition` is specified, then `definition` cannot be specified.
- **definition** (object) The inference definition for the model. If `definition` is specified, then `compressed_definition` cannot be specified.
- **description** (string) A human-readable description of the inference trained model.
- **inference_config** (object) The default configuration for inference. This can be either a regression or classification configuration. It must match the `target_type` of the underlying `definition.trained_model`. For pre-packaged models such as ELSER the config is not required.
- **input** (object) The input field names for the model definition.
- **metadata** (object) An object map that contains metadata about the model.
- **model_type** (string) The model type. Supported values include:
  - `tree_ensemble`: The model definition is an ensemble model of decision trees.
  - `lang_ident`: A special type reserved for language identification models.
  - `pytorch`: The stored definition is a PyTorch (specifically a TorchScript) model. Currently only NLP models are supported.
- **model_size_bytes** (number) The estimated memory usage in bytes to keep the trained model in memory. This property is supported only if `defer_definition_decompression` is `true` or the model definition is not supplied.
- **platform_architecture** (string) The platform architecture (if applicable) of the trained model. If the model only works on one platform, because it is heavily optimized for a particular processor architecture and OS combination, this field specifies which. The string must match one of the platform identifiers used by Elasticsearch: `linux-x86_64`, `linux-aarch64`, `darwin-x86_64`, `darwin-aarch64`, or `windows-x86_64`. For portable models (those that work independently of processor architecture or OS features), leave this field unset.
- **tags** (array[string]) An array of tags to organize the model.
- **prefix_strings** (object) Optional prefix strings applied at inference.
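As an illustration of how the path, query, and body parameters above fit together, here is a hedged sketch of a request rather than an official example: the model ID, feature names, tag, and byte count are placeholders, the `compressed_definition` value is elided, and the nested `inference_config.regression` and `input.field_names` keys are assumptions based on common Elasticsearch trained model configurations.

```
PUT /_ml/trained_models/my-tree-ensemble-model?defer_definition_decompression=true
{
  "description": "Illustrative tree ensemble regression model",
  "model_type": "tree_ensemble",
  "inference_config": {
    "regression": {}
  },
  "input": {
    "field_names": ["feature_1", "feature_2"]
  },
  "compressed_definition": "<GZipped and Base64 encoded definition>",
  "model_size_bytes": 1024,
  "tags": ["example"]
}
```

Because `defer_definition_decompression=true`, supplying `model_size_bytes` alongside the compressed definition is permitted, per the parameter notes above.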
## Responses

### 200

#### Body: application/json (object)

- **model_id** (string) Identifier for the trained model.
- **model_type** (string) The model type. Supported values include:
  - `tree_ensemble`: The model definition is an ensemble model of decision trees.
  - `lang_ident`: A special type reserved for language identification models.
  - `pytorch`: The stored definition is a PyTorch (specifically a TorchScript) model. Currently only NLP models are supported.
- **tags** (array[string]) An array of tags. A trained model can have many tags, or none.
- **version** (string) The Elasticsearch version number in which the trained model was created.
- **compressed_definition** (string) The compressed (GZipped and Base64 encoded) inference definition of the model.
- **created_by** (string) Information on the creator of the trained model.
- **create_time** (string | number) The time when the trained model was created.
- **default_field_map** (object) Any field map described in the inference configuration takes precedence.
- **description** (string) The free-text description of the trained model.
- **estimated_heap_memory_usage_bytes** (number) The estimated heap usage in bytes to keep the trained model in memory.
- **estimated_operations** (number) The estimated number of operations to use the trained model.
- **fully_defined** (boolean) True if the full model definition is present.
- **inference_config** (object) The default configuration for inference. This can be either a regression, classification, or one of the many NLP-focused configurations. It must match the `target_type` of the underlying `definition.trained_model`. For pre-packaged models such as ELSER the config is not required.
- **input** (object) The input field names for the model definition.
- **license_level** (string) The license level of the trained model.
- **metadata** (object) An object containing metadata about the trained model. For example, models created by data frame analytics contain `analysis_config` and `input` objects.
- **model_size_bytes** (number | string) The estimated memory usage in bytes to keep the trained model in memory.
- **model_package** (object)
- **location** (object)
- **platform_architecture** (string) The platform architecture (if applicable) of the trained model.
- **prefix_strings** (object) Optional prefix strings applied at inference.
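To show how the response fields above come together, here is a hedged sketch of a possible 200 body for the request earlier on this page. Every value is a placeholder rather than output from a live cluster, and fields whose content is not documented above (`model_package`, `location`) are omitted.

```
{
  "model_id": "my-tree-ensemble-model",
  "model_type": "tree_ensemble",
  "created_by": "api_user",
  "create_time": 1712345678901,
  "version": "8.12.0",
  "description": "Illustrative tree ensemble regression model",
  "model_size_bytes": 1024,
  "estimated_operations": 0,
  "license_level": "platinum",
  "tags": ["example"],
  "input": {
    "field_names": ["feature_1", "feature_2"]
  },
  "inference_config": {
    "regression": {}
  }
}
```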