How to combine a pattern analyzer and a char_filter in Elasticsearch


I have a keyword field to tokenize (split on commas) that may contain "+" characters. Example:

query_string.keywords = living,music,+concerts+and+live+bands,news,portland 

The following settings, applied when creating the index, do a nice job of splitting the keywords on commas:

    {
        "settings": {
            "number_of_shards": 5,
            "analysis": {
                "analyzer": {
                    "happy_tokens": {
                        "type":    "pattern",
                        "pattern": "([,]+)"
                    }
                }
            }
        },
        "mappings": {
            "post": {
                "properties": {
                    "query_string.keywords": {
                        "type": "string",
                        "analyzer": "happy_tokens"
                    }
                }
            }
        }
    }

How can I add a char_filter (see below) to change the "+" characters to spaces or empty strings?

    "char_filter": {
        "kill_pluses": {
            "type": "pattern_replace",
            "pattern": "+",
            "replace": ""
        }
    }

You need to escape the "+", because "+" has a special meaning (a quantifier) in regular expressions. Also note that the pattern_replace char filter's parameter is named "replacement", not "replace":

    "char_filter": {
        "kill_pluses": {
            "type": "pattern_replace",
            "pattern": "\\+",
            "replacement": ""
        }
    }
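Elasticsearch compiles these patterns with Java's regex engine, but the escaping issue is the same in most regex flavors. A quick Python sketch (Python's `re` module here, purely to illustrate the principle) shows why the unescaped "+" fails:

    import re

    s = "+concerts+and+live+bands"

    # An unescaped "+" is a quantifier with nothing to repeat,
    # so the pattern itself is rejected:
    try:
        re.sub("+", " ", s)
    except re.error as e:
        print("invalid pattern:", e)

    # Escaped, it matches a literal plus sign:
    cleaned = re.sub(r"\+", " ", s).strip()
    print(cleaned)  # concerts and live bands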
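To actually combine the two (my addition, not from the original answer): the pattern analyzer is a prebuilt analyzer type and does not accept a char_filter parameter, so one way to wire the char filter in is to define a custom analyzer that uses a pattern tokenizer plus the kill_pluses char filter. A sketch of the settings, reusing the names from the question:

    {
        "settings": {
            "number_of_shards": 5,
            "analysis": {
                "char_filter": {
                    "kill_pluses": {
                        "type": "pattern_replace",
                        "pattern": "\\+",
                        "replacement": " "
                    }
                },
                "tokenizer": {
                    "happy_tokens": {
                        "type":    "pattern",
                        "pattern": "([,]+)"
                    }
                },
                "analyzer": {
                    "happy_tokens": {
                        "type":        "custom",
                        "tokenizer":   "happy_tokens",
                        "char_filter": ["kill_pluses"]
                    }
                }
            }
        }
    }

Here "replacement" is set to a space rather than an empty string; use "" instead if you want the pluses removed outright. The char filter runs before the tokenizer, so the pluses are gone by the time the comma-splitting happens.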
