ElasticSearch
Config docker-compose.yml
ERROR: [1] bootstrap checks failed
My setup problem was here: you should add discovery.type=single-node to run Elasticsearch as a single node. Have a nice day :)
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch1
    environment:
      - discovery.type=single-node
      - transport.host=localhost
      - transport.tcp.port=9300
      - http.port=9200
      - http.host=0.0.0.0
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    networks:
      - elastic
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - esdata:/usr/share/elasticsearch/data
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_URL: http://elasticsearch1:9200/
    ports:
      - 5601:5601
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    networks:
      - elastic
  apmserver:
    image: docker.elastic.co/apm/apm-server:7.8.0
    container_name: apm-server
    command: --strict.perms=false
    ports:
      - 8200:8200
      - 8201:8200
    environment:
      - apm-server.host=0.0.0.0
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
      - apm-server.secret_token=xxVpmQB2HMzCL9PgBHVrnxjNXXw5J7bd79DFm6sjBJR5HPXDhcF8MSb3vv4bpg44
      - setup.template.enabled=true
      - logging.to_files=false
    # volumes:
    #   - ${PWD}/configs/apm-server.yml:/usr/share/apm-server/apm-server.yml
    networks:
      - elastic
networks:
  elastic:
    name: "elastic"
volumes:
  esdata:
    driver: local

Now you can see the volume bound under the /var/lib/docker/volumes folder.
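After saving the file, a quick way to bring the stack up and verify it (assuming Docker Compose is installed and the file is named docker-compose.yml in the current directory):

docker-compose up -d                                  # start all services in the background
curl http://localhost:9200                            # Elasticsearch answers with its name/version JSON
curl http://localhost:9200/_cluster/health?pretty     # cluster status should be green or yellow
curl -I http://localhost:5601                         # Kibana responds once it has connected to Elasticsearch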
For installation without Docker
Enable TLS/HTTPS
The output is the same as the following guide:
Steps for securing the Elastic Stack

Step 1. Preparations
Download the following components of Elastic Stack 7.1 or later: Elasticsearch, Kibana, Logstash, and Filebeat.
[1-1] Configure /etc/hosts file
For this example, node1 has a browser installed, so kibana.local will allow access to the Kibana web page.
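The exact entries depend on your machines; a minimal sketch, assuming two nodes and placeholder addresses (the domain names such as node1.elastic.test.com are example values used throughout the rest of this guide, not anything mandated by Elasticsearch):

# /etc/hosts on node1 (example addresses, adjust to your network)
192.168.0.2   node1.elastic.test.com node1 kibana.local
192.168.0.3   node2.elastic.test.com node2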
Step 2. Create SSL certificates and enable TLS for Elasticsearch on node1
[2-1] Set environment variables (adapt these variable paths depending on where and how Elasticsearch was downloaded)
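For example, if the Elasticsearch archive was extracted under your home directory (a placeholder path and version, adjust to your setup):

export ES_HOME=$HOME/elasticsearch-7.8.0
export ES_PATH_CONF=$ES_HOME/config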
[2-2] Create tmp folder
[2-3] Create instance yaml file
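A sketch of the instances file that elasticsearch-certutil will read in the next step. The instance names and DNS entries are assumptions matching the example hosts above, and the file is assumed to live in the tmp folder from [2-2], for example ~/tmp/cert_blog/instance.yml:

instances:
  - name: 'node1'
    dns: [ 'node1.elastic.test.com' ]
  - name: 'node2'
    dns: [ 'node2.elastic.test.com' ]
  - name: 'my-kibana'
    dns: [ 'kibana.local' ]
  - name: 'logstash'
    dns: [ 'logstash.local' ]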
[2-4] Generate CA and server certificates (once Elasticsearch is installed)
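A hedged sketch of the generation command; the input and output paths are assumptions, while --keep-ca-key is the option referenced in the note at the end of this step. If the tool prompts for a password, keep it together with the ca files as that note explains.

cd $ES_HOME
# creates a CA plus one certificate/key pair per instance listed in instance.yml
bin/elasticsearch-certutil cert --keep-ca-key --pem \
  --in ~/tmp/cert_blog/instance.yml \
  --out ~/tmp/cert_blog/certs.zip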
[2-5] Unzip the certificates
[2-6] Elasticsearch TLS setup
[2-6-1] Copy cert file to config folder
[2-6-2] Configure elasticsearch.yml
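A minimal sketch of the TLS-related settings for node1, assuming the certificates from [2-6-1] were copied into a certs folder inside the Elasticsearch config directory (hostnames and file names follow the example instance names above):

node.name: node1
network.host: node1.elastic.test.com
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/node1.key
xpack.security.http.ssl.certificate: certs/node1.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/node1.key
xpack.security.transport.ssl.certificate: certs/node1.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
discovery.seed_hosts: [ "node1.elastic.test.com" ]
cluster.initial_master_nodes: [ "node1" ]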
[2-6-3] Start and check cluster log
[2-6-4] Set built-in user password
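The bundled tool can generate passwords for all built-in users; point it at the HTTPS address node1 now listens on (hostname is a placeholder). Note down at least the elastic, kibana, and logstash_system passwords, since they are reused in later steps:

cd $ES_HOME
bin/elasticsearch-setup-passwords auto -u "https://node1.elastic.test.com:9200"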
[2-6-5] Access _cat/nodes API via HTTPS
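For example, using the generated CA certificate and the elastic user's password from the previous step (paths are assumptions):

curl --cacert ~/tmp/cert_blog/certs/ca/ca.crt \
  -u elastic 'https://node1.elastic.test.com:9200/_cat/nodes?v'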
When we generated our SSL certificates in step 2-4, we provided the --keep-ca-key option which means the certs.zip file contains a ca/ca.key file alongside the ca/ca.crt file. If you ever decide to add more nodes to your Elasticsearch cluster, you'll want to generate additional node certificates, and for that you will need both of those "ca" files as well as the password you used to generate them. Please keep a copy of these ca files and the password used to generate them in a safe place.
Step 3. Enable TLS for Kibana on node1
[3-1] Set environment variables
Adapt these variable paths depending on where and how Kibana was downloaded:
[3-2] Create config and config/certs folder and copy certs (once Kibana is installed)
Copy the certificate files created in step 2-4 and paste them into kibana/config/certs.
[3-3] Configure kibana.yml
Remember to use the password generated for the built-in user above. You need to replace <kibana_password> with the password that was defined in step 2-6-4.
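A sketch of the relevant kibana.yml settings, assuming the certificate files from [3-2] are in config/certs and using the example hostnames from Step 1 (replace <kibana_password> as described above):

server.name: "kibana.local"
server.host: "kibana.local"
server.ssl.enabled: true
server.ssl.certificate: config/certs/my-kibana.crt
server.ssl.key: config/certs/my-kibana.key
elasticsearch.hosts: [ "https://node1.elastic.test.com:9200" ]
elasticsearch.username: "kibana"
elasticsearch.password: "<kibana_password>"
elasticsearch.ssl.certificateAuthorities: [ "config/certs/ca.crt" ]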
[3-4] Start Kibana and test Kibana login
Access https://kibana.local:5601/ from a browser. Log in using the elastic user and the password defined in step 2-6-4. For this example, node1 has a browser installed, so kibana.local will allow access to Kibana.

Publicly trusted authorities have very strict standards and auditing practices to ensure that a certificate is not created without validating proper identity ownership. For the purpose of this blog post, we will create a self-signed certificate for Kibana (meaning the generated certificate was signed by using its own private key). Due to clients not trusting self-signed Kibana certificates, you will see a message similar to the following in your Kibana logs, until proper trust is established by using certificates generated by an enterprise or public CA (here's the link to the issue in the Kibana repo). This issue does not affect your ability to work in Kibana:
Step 4. Enable TLS for Elasticsearch on node2
[4-1] Set environment variables
[4-2] Set up TLS on node2
You can use the scp command to copy certificates from node1 to node2. Both nodes require the certificate and key in order to secure the connection. In a production environment, it is recommended to use a properly signed key for each node. For demonstration purposes, we are using an automatically generated CA certificate and a multi-DNS hostname certificate signed by our generated CA.
[4-3] Configure elasticsearch.yml
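The node2 configuration mirrors node1, with its own certificate files and node1 listed as a seed host. Again a sketch with the example hostnames, to be adapted to your paths:

node.name: node2
network.host: node2.elastic.test.com
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/node2.key
xpack.security.http.ssl.certificate: certs/node2.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/node2.key
xpack.security.transport.ssl.certificate: certs/node2.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
discovery.seed_hosts: [ "node1.elastic.test.com" ]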
[4-4] Start and check cluster log
[4-5] Access _cat/nodes API via HTTPS
Step 5. Prepare Logstash users on node1
[5-1] Create logstash_write_role
You can create the role in multiple ways.
You can create it by using the Kibana Roles UI:

Or create it by using the API in the Kibana Dev Tools tab:
And you'll get a response acknowledging that the role was created.
Users who have this role assigned will not be able to delete any documents. The role only allows them to create indices whose names start with logstash and to index documents into those indices.
Note for ILM users: For the logstash_writer_role to work with index lifecycle management (ILM) — enabled by default in 7.3+ — the following privileges must be included:
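A sketch of the Dev Tools request, with the ILM-related additions folded in. The role name logstash_writer, the index pattern logstash-*, and the exact ILM privileges shown here (manage_ilm at cluster level, manage and manage_ilm on the matching indices) are assumptions to adapt to your setup:

POST /_security/role/logstash_writer
{
  "cluster": ["monitor", "manage_index_templates", "manage_ilm"],
  "indices": [
    {
      "names": ["logstash-*"],
      "privileges": ["write", "create", "create_index", "manage", "manage_ilm"]
    }
  ]
}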
[5-2] Create logstash_writer user (please change the password for the user logstash_writer)
You can create the user in multiple ways.
You can create it by using the Kibana Users UI:

Or create it by using the API in the Kibana Dev Tools tab:
And you'll get a response acknowledging that the user was created.
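A sketch of that Dev Tools request, assigning the role created in step 5-1 (the role name must match whatever you called it there, and the password placeholder is yours to replace):

POST /_security/user/logstash_writer
{
  "password": "<logstash_writer_password>",
  "roles": ["logstash_writer"],
  "full_name": "Logstash writer user"
}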
Step 6. Enable TLS for Logstash on node1
[6-1] Create folder and copy certificates
[6-2] Convert logstash.key to PKCS#8 format for Beats input plugin
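For example, run this from the certs folder created in [6-1] (the directory path is an assumption, the key file names follow the instance names from step 2-3):

cd /etc/logstash/config/certs
openssl pkcs8 -in logstash.key -topk8 -nocrypt -out logstash.pkcs8.key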
[6-3] Configure logstash.yml
Remember to use the auto-generated password for the logstash_system user, i.e. the one generated in step 2-6-4.
Then edit:
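A sketch of the monitoring-related settings; paths and hostnames are assumptions, and the password is the logstash_system one noted above:

node.name: logstash.local
path.config: /etc/logstash/conf.d/*.conf
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: '<logstash_system_password>'
xpack.monitoring.elasticsearch.hosts: [ 'https://node1.elastic.test.com:9200' ]
xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/logstash/config/certs/ca.crt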
[6-4] Create and configure conf.d/example.conf
In the Elasticsearch output, use the password defined in step 5-2.
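A minimal pipeline sketch: a Beats input secured with the PKCS#8 key from [6-2], and an Elasticsearch output authenticating as the logstash_writer user. The port, paths, and index name are assumptions:

input {
  beats {
    port => 5044
    ssl => true
    ssl_key => '/etc/logstash/config/certs/logstash.pkcs8.key'
    ssl_certificate => '/etc/logstash/config/certs/logstash.crt'
  }
}
output {
  elasticsearch {
    hosts => ["https://node1.elastic.test.com:9200"]
    cacert => '/etc/logstash/config/certs/ca.crt'
    user => 'logstash_writer'
    password => '<logstash_writer_password>'
    index => 'logstash-%{+YYYY.MM.dd}'
  }
}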
[6-5] Start Logstash with the example configuration and check the Logstash log
We should see the following log messages:
And in the Kibana Monitoring tab, Logstash will be displayed (node info, pipeline settings, OS info, JVM info, process stats, and pipeline runtime stats):

Step 7. Run Filebeat and set up TLS on node1
[7-1] Create a config folder and copy certificates
[7-2] Create a new filebeat.yml
[7-3] Edit your new configuration file filebeat.yml
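A sketch of the parts that matter here: a log input reading the tutorial file from step 8-1 and a Logstash output that trusts the CA generated in step 2-4 (all paths are assumptions):

filebeat.inputs:
- type: log
  paths:
    - /path/to/logstash-tutorial.log    # wherever the sample log from step 8-1 was saved
output.logstash:
  hosts: ["logstash.local:5044"]
  ssl.certificate_authorities:
    - /etc/filebeat/config/certs/ca.crt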
Step 8. Use Filebeat to ingest data
[8-1] Prepare input log data (logstash-tutorial.log) for Filebeat
First, download the input log data.
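The sample file from the Logstash getting-started tutorial works well here; for example (verify the URL is still current):

wget https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz
gunzip logstash-tutorial.log.gz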
[8-2] Start Filebeat
[8-3] Check the log
We should see the following log messages:
[8-4] Create index pattern
Next, create an index pattern that matches the data being ingested. This will allow you to visualize the data in Kibana, for example with Graph or Discover.

Then select the Time Filter field name. In our example, this is @timestamp:

And that’s it! You have encrypted the communication between the different parts of the Elastic Stack, and now you’re safely and securely ingesting log data.
A few last things...
If you run into any issues while configuring security, the first place we’d recommend turning is the security troubleshooting guide in our documentation. It can help with many common issues. If you still have questions after that, you should check out our Elastic forums for additional help. Or if you want to talk to the Elastic Support team directly, start an Elastic subscription today and have direct access to a team of experts. Be safe out there!