
I need Elastic Search Query based on following SQL Statement

SELECT * FROM documents
WHERE (doc_name LIKE "%test%" OR doc_type LIKE "%test%" OR doc_desc LIKE "%test%") AND
user_id = 1 AND doc_category = "Utilities"

2 Answers


It depends on your mapping, but you can start working with something like this:

"query": {
    "filtered": {
        "filter": {
            "bool": {
                "must": [
                    {
                        "term": {
                            "user_id": 1
                        }
                    },
                    {
                        "term": {
                            "doc_category": "Utilities"
                        }
                    }
                ]
            }
        },
        "query": {
            "multi_match": {
                "query": "test",
                "fields": ["doc_name", "doc_type", "doc_desc"]
            }
        }
    }
}
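Note that the filtered query shown above is Elasticsearch 1.x syntax; it was deprecated in 2.0 and later removed. On newer versions the same logic can be expressed with a bool query, moving the term clauses into the filter section (a sketch, assuming the same field names and that doc_category is stored as an exact, not-analyzed value):

```json
{
  "query": {
    "bool": {
      "must": {
        "multi_match": {
          "query": "test",
          "fields": ["doc_name", "doc_type", "doc_desc"]
        }
      },
      "filter": [
        { "term": { "user_id": 1 } },
        { "term": { "doc_category": "Utilities" } }
      ]
    }
  }
}
```

Clauses under filter do not contribute to scoring and can be cached, which matches the intent of the original filtered query.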

1 Comment

Thanks for sharing, but I am not getting the right result from it. Can you please help me with how to add the LIKE condition along with it? The bool query does not solve my problem.

Adding to the answer given by jbasko: doing LIKE queries in Elasticsearch very much depends on the mapping of your document fields. For example, if you want the equivalent of LIKE '%test%' in Elasticsearch, you need to use the ngram tokenizer:

{
  "settings": {
    "analysis": {
      "analyzer": {
        "some_analyzer_name": {
          "tokenizer": "some_tokenizer_name"
        }
      },
      "tokenizer": {
        "some_tokenizer_name": {
          "type": "ngram",
          "min_gram": <minimum number of characters>,
          "max_gram": <maximum number of characters>,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  },
  ...
}

and in the mapping of the fields use the analyzer:

"mappings": {
  ...
  "doc_name": {
    "type": "string",
    "analyzer": "some_analyzer_name"
  },
  ...
  "doc_type": {
    "type": "string",
    "analyzer": "some_analyzer_name"
  },
  ...
}
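Putting the two pieces together, the settings and mappings are usually sent in a single index-creation request. A minimal sketch (the index name documents, the type name doc, and the gram sizes 3 and 10 are illustrative placeholders, not values from the question):

```json
PUT /documents
{
  "settings": {
    "analysis": {
      "analyzer": {
        "some_analyzer_name": { "tokenizer": "some_tokenizer_name" }
      },
      "tokenizer": {
        "some_tokenizer_name": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 10,
          "token_chars": ["letter", "digit"]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "doc_name": { "type": "string", "analyzer": "some_analyzer_name" },
        "doc_type": { "type": "string", "analyzer": "some_analyzer_name" },
        "doc_desc": { "type": "string", "analyzer": "some_analyzer_name" }
      }
    }
  }
}
```

In the index mapping, field definitions live under a properties object inside the document type; the analyzer setting there ties each field to the ngram analyzer defined in settings.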

A short explanation of ngram: this tokenizer breaks the strings in doc_name, doc_type, and the other fields into short consecutive substrings, with lengths between the limits you define in the settings.

For example, an ngram tokenizer with

min_gram : 1  
max_gram : 3  

applied to the string "abcd" produces the collection of terms 'a', 'ab', 'abc', 'b', 'bc', 'bcd', 'c', 'cd', 'd'. These terms are what Elasticsearch matches against through its inverted index when you run a match (or multi_match) query.
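The tokenizer's output can be reproduced with a few lines of Python, as an illustration of the concept (not Elasticsearch's actual implementation):

```python
def ngrams(text, min_gram, max_gram):
    """Generate every substring of text whose length lies
    between min_gram and max_gram, in reading order."""
    terms = []
    for start in range(len(text)):
        for size in range(min_gram, max_gram + 1):
            if start + size <= len(text):
                terms.append(text[start:start + size])
    return terms

print(ngrams("abcd", 1, 3))
# ['a', 'ab', 'abc', 'b', 'bc', 'bcd', 'c', 'cd', 'd']
```

Because "test" with the same settings yields terms such as 'tes' and 'est', a document containing "mytestdoc" indexes those same terms and therefore matches, which is exactly the LIKE '%test%' behaviour.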

For further reading, look up mapping, the ngram tokenizer, and term vectors in the Elasticsearch reference documentation.
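As an aside, if reindexing with ngrams is not an option, a wildcard query is the closest literal analogue of LIKE '%test%'. It needs no mapping changes, but it scans terms at query time and is slow on large indices, so it is best treated as a stopgap (a sketch against one field):

```json
{
  "query": {
    "wildcard": {
      "doc_name": "*test*"
    }
  }
}
```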

