
Elasticsearch Security Measures

After installing Elasticsearch in production, many new developers leave it unconfigured. In this post, we will look at important configuration file changes and their implications.

cluster.name:

By default, Elasticsearch's cluster name is elasticsearch. It is important to make it unique per cluster (if you plan on one) or per single endpoint. All nodes with the same cluster name on the same network will try to join one cluster, so in production the chance of multiple VPS instances that all kept the default name elasticsearch forming a cluster is high. That means data which should be limited to a single endpoint or VPS is distributed among all those nodes, and you might not even know it.

What happens if cluster.name is not changed:

  1. Elasticsearch takes longer to start.
  2. Slower indexing and search results (since data is distributed across unintended nodes).
  3. If one of the endpoints that unknowingly joined the cluster goes missing, you are left with incomplete data.
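As a sketch, the setting lives in elasticsearch.yml; the name my-app-prod below is just an example, pick your own:

```yaml
# elasticsearch.yml
# A unique name keeps stray nodes on the same network from joining by accident.
cluster.name: my-app-prod   # example name, not a default
```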

node.name:

Although not mandatory on a single endpoint/VPS, in a cluster you need to be able to identify the nodes that have joined. Set this option in the config file and restart Elasticsearch, and it will start with your given node name.
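A minimal sketch of the setting, with a hypothetical node name:

```yaml
# elasticsearch.yml
# A stable, human-readable name makes nodes easy to identify in cluster APIs and logs.
node.name: node-1   # example; without this, Elasticsearch picks a name for you
```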

path.data:

If you don't want to store Elasticsearch data in the default location, or if you're using NFS or an encrypted disk, this option should be set to your mounted path.
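For example, assuming a mount point of /mnt/elasticsearch (a hypothetical path):

```yaml
# elasticsearch.yml
# Store index data on the mounted disk instead of the default location.
path.data: /mnt/elasticsearch/data   # example mounted path
```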

network.host:

This option matters when you're working with private and public networks, as on AWS, or if you want your endpoint to serve your instance on a public interface.

If you set it to a private address like 192.168.x.x, access is restricted to endpoints in that private network only.

Note: If you want to use it only on localhost, set it to 127.0.0.1; otherwise it may be exposed on the public network.
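The two cases above look like this in the config file (the private address is an example):

```yaml
# elasticsearch.yml
# Bind only to loopback so the node is unreachable from outside the machine:
network.host: 127.0.0.1

# Or bind to a private interface address so only that network can reach it:
# network.host: 192.168.1.10   # example private address
```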

Others:

  1. When you're exposing Elasticsearch on a public network, make sure an authentication and authorization plugin such as Shield is used, or your data is at risk.
  2. When forming a cluster, if you don't set the proper number of master-eligible nodes, the chance of the cluster entering a split-brain state is high, and your cluster might lose data.
  3. If you're planning a cluster, keeping data nodes separate and having at least one replica will keep your cluster highly available and also serves as a disaster recovery technique.
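For point 2, a sketch of the split-brain guard, assuming a cluster with three master-eligible nodes:

```yaml
# elasticsearch.yml
# With 3 master-eligible nodes, require a quorum of 2 to elect a master.
# This stops the two halves of a partitioned cluster from each electing
# their own master (the split-brain scenario).
discovery.zen.minimum_master_nodes: 2   # rule of thumb: (master_eligible_nodes / 2) + 1
```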

 

Posted on 05 November 2014 by Micropyramid

