
Set up and configure semantic_text fields

Serverless Stack 9.0.0

This page provides instructions for setting up and configuring semantic_text fields. Learn how to configure inference endpoints, including default and preconfigured options, ELSER on EIS, custom endpoints, and dedicated endpoints for ingestion and search operations.

You can configure inference endpoints for semantic_text fields in the following ways:

- Using the default or a preconfigured endpoint, including ELSER on EIS
- Using a custom inference endpoint
- Using dedicated inference endpoints for ingestion and search

Note

If you use a custom inference endpoint through your ML nodes rather than through the Elastic Inference Service (EIS), the recommended method is to use dedicated endpoints for ingestion and search.

Stack 9.1.0 If you use EIS, you don't have to set up dedicated endpoints.

This section shows you how to set up semantic_text with different default and preconfigured endpoints.

Serverless

To use the default .elser-2-elastic endpoint that runs on EIS, you can set up semantic_text with the following API request:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text"
      }
    }
  }
}

If you don't specify an inference endpoint, the inference_id field defaults to .elser-2-elastic, a preconfigured endpoint that runs on the Elastic Inference Service (EIS).

Stack 9.2.0 Self-Managed Unavailable

To use the preconfigured .elser-2-elastic endpoint that runs on EIS, you can set up semantic_text with the following API request:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": ".elser-2-elastic"
      }
    }
  }
}

If you don't specify an inference endpoint, the inference_id field defaults to .elser-2-elasticsearch, a preconfigured endpoint for the elasticsearch service.

If you use the default .elser-2-elasticsearch endpoint, you can set up semantic_text with the following API request:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text"
      }
    }
  }
}

Serverless Stack GA 9.2.0 Self-Managed Unavailable

If you use the preconfigured .elser-2-elastic endpoint that utilizes the ELSER model as a service through the Elastic Inference Service (ELSER on EIS), you can set up semantic_text with the following API request:

Serverless

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text"
      }
    }
  }
}

Stack 9.2.0 Self-Managed Unavailable

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": ".elser-2-elastic"
      }
    }
  }
}

To use a custom inference endpoint instead of the default endpoint, you must create an inference endpoint using the Create inference API and specify its inference_id when setting up the semantic_text field type.
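For example, a hypothetical OpenAI text embedding endpoint named my-openai-endpoint could be created with a request along these lines; the model_id and the API key placeholder shown here are illustrative assumptions, not required values:

PUT _inference/text_embedding/my-openai-endpoint
{
  // Hypothetical endpoint: replace the API key and model with your own settings
  "service": "openai",
  "service_settings": {
    "api_key": "<your-api-key>",
    "model_id": "text-embedding-3-small"
  }
}

Then reference its inference_id in the index mapping: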

PUT my-index-000002
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-openai-endpoint"
      }
    }
  }
}
  1. The inference_id of the inference endpoint to use to generate embeddings.

If you use a custom inference endpoint through your ML nodes rather than through the Elastic Inference Service, the recommended way to use semantic_text is with dedicated inference endpoints for ingestion and search.

This ensures that search speed remains unaffected by ingestion workloads, and vice versa. After creating dedicated inference endpoints for both, you can reference them using the inference_id and search_inference_id parameters when setting up the index mapping for an index that uses the semantic_text field.
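As a sketch of what those dedicated endpoints might look like, the following requests create two hypothetical ELSER endpoints on the elasticsearch service, one for ingest and one for search; the allocation and thread settings are illustrative assumptions that you should tune to your own workloads:

PUT _inference/sparse_embedding/my-elser-endpoint-for-ingest
{
  // Hypothetical ingest endpoint: allocation settings are example values
  "service": "elasticsearch",
  "service_settings": {
    "model_id": ".elser_model_2",
    "num_threads": 1,
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 8
    }
  }
}

PUT _inference/sparse_embedding/my-elser-endpoint-for-search
{
  // Hypothetical search endpoint: allocation settings are example values
  "service": "elasticsearch",
  "service_settings": {
    "model_id": ".elser_model_2",
    "num_threads": 1,
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 8
    }
  }
}

The index mapping below then points ingestion at one endpoint and search at the other: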

PUT my-index-000003
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint-for-ingest",
        "search_inference_id": "my-elser-endpoint-for-search"
      }
    }
  }
}
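
To check the setup, you could index a document into the field; assuming the mapping above, the text is chunked and embedded at ingest time using the endpoint referenced by inference_id (the document ID and text here are only examples):

PUT my-index-000003/_doc/1
{
  // Example document: embeddings are generated automatically at ingest time
  "inference_field": "Elastic provides a semantic_text field type that generates embeddings at ingest time."
}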