Elasticsearch
docker-compose.yml configuration
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch1
    environment:
      - discovery.type=single-node
      - transport.host=localhost
      - transport.tcp.port=9300
      - http.port=9200
      - http.host=0.0.0.0
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    networks:
      - elastic
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - esdata:/usr/share/elasticsearch/data
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_HOSTS: http://elasticsearch1:9200/
    ports:
      - 5601:5601
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    networks:
      - elastic
  apmserver:
    image: docker.elastic.co/apm/apm-server:7.8.0
    container_name: apm-server
    command: --strict.perms=false
    ports:
      - 8200:8200
      - 8201:8200
    environment:
      - apm-server.host=0.0.0.0
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
      - apm-server.secret_token=xxVpmQB2HMzCL9PgBHVrnxjNXXw5J7bd79DFm6sjBJR5HPXDhcF8MSb3vv4bpg44
      - setup.template.enabled=true
      - logging.to_files=false
    # volumes:
    #   - ${PWD}/configs/apm-server.yml:/usr/share/apm-server/apm-server.yml
    networks:
      - elastic
networks:
  elastic:
    name: "elastic"
volumes:
  esdata:
    driver: local

Enable TLS / HTTPS
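With the file above saved as docker-compose.yml, the stack can be brought up and sanity-checked roughly like this (ports as configured above; a sketch, not part of the original notes):

```shell
# Start Elasticsearch, Kibana, and APM Server in the background
docker-compose up -d

# Once Elasticsearch is up, check cluster health on the published port
curl -s http://localhost:9200/_cluster/health?pretty

# Kibana should answer on http://localhost:5601 after it connects;
# follow its log while it starts
docker-compose logs -f kibana
```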
The result is the same as described in the guide:
Steps for securing the Elastic Stack

Step 1. Preparations
[1-1] Configure /etc/hosts file
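As an illustration for [1-1], the /etc/hosts entries on both machines might look like the following (node names and addresses are assumptions; use your own):

```
192.168.0.1  node1.elastic.test.com  node1
192.168.0.2  node2.elastic.test.com  node2
```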
Step 2. Create SSL certificates and enable TLS for Elasticsearch on node1
[2-1] Set environment variables (adapt these paths depending on where and how Elasticsearch was installed)
[2-2] Create tmp folder
[2-3] Create instance yaml file
[2-4] Generate CA and server certificates (once Elasticsearch is installed)
[2-5] Unzip the certificates
[2-6] Elasticsearch TLS setup
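Steps [2-3] through [2-5] can be sketched with elasticsearch-certutil as follows (the tmp folder, instance names, and DNS names are assumptions matching the hosts from Step 1):

```shell
# [2-3] Instance file listing the nodes that need certificates
cat > ~/tmp/cert_blog/instance.yml <<'EOF'
instances:
  - name: 'node1'
    dns: [ 'node1.elastic.test.com' ]
  - name: 'node2'
    dns: [ 'node2.elastic.test.com' ]
EOF

# [2-4] Generate a CA plus per-node certificates in one pass (run from
# the Elasticsearch home directory)
bin/elasticsearch-certutil cert --keep-ca-key --pem \
  --in ~/tmp/cert_blog/instance.yml --out ~/tmp/cert_blog/certs.zip

# [2-5] Unzip the certificates for distribution to each node
unzip ~/tmp/cert_blog/certs.zip -d ~/tmp/cert_blog/certs
```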
Step 3. Enable TLS for Kibana on node1
[3-1] Set environment variables
[3-2] Create config and config/certs folder and copy certs (once Kibana is installed)
[3-3] Configure kibana.yml
[3-4] Start Kibana and test Kibana login
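For [3-3], the TLS-related kibana.yml settings might look like this (file paths and the hostname are assumptions; the password placeholder must be replaced):

```yaml
server.host: "node1.elastic.test.com"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/config/certs/node1.crt
server.ssl.key: /etc/kibana/config/certs/node1.key
elasticsearch.hosts: ["https://node1.elastic.test.com:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "<kibana user password>"
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/config/certs/ca.crt"]
```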

Step 4. Enable TLS for Elasticsearch on node2
[4-1] Set environment variables
[4-2] Set up TLS on node2
[4-3] Configure elasticsearch.yml
[4-4] Start and check cluster log
[4-5] Access _cat/nodes API via HTTPS
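For [4-3], the node2 settings might be sketched as follows (node names and relative cert paths under the Elasticsearch config directory are assumptions):

```yaml
# elasticsearch.yml (node2) – TLS on both the transport and HTTP layers
node.name: node2
network.host: node2.elastic.test.com
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/node2.key
xpack.security.http.ssl.certificate: certs/node2.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs/node2.key
xpack.security.transport.ssl.certificate: certs/node2.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
discovery.seed_hosts: [ "node1.elastic.test.com" ]
```

The [4-5] check is then something like `curl --cacert certs/ca.crt -u elastic "https://node2.elastic.test.com:9200/_cat/nodes?v"`.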
Step 5. Prepare Logstash users on node1
[5-1] Create logstash_write_role

[5-2] Create logstash_writer user (please change the password for the user logstash_writer)
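Steps [5-1] and [5-2] map onto the Elasticsearch security API roughly as follows (the role/user bodies follow the common Logstash-writer pattern; hostname, cert path, and the "changeme" password are placeholders):

```shell
# [5-1] Role that lets Logstash create and write logstash-* indices
curl -u elastic --cacert certs/ca.crt -X POST \
  "https://node1.elastic.test.com:9200/_security/role/logstash_write_role" \
  -H 'Content-Type: application/json' -d'
{
  "cluster": ["monitor", "manage_index_templates"],
  "indices": [
    { "names": ["logstash-*"], "privileges": ["write", "create_index"] }
  ]
}'

# [5-2] User bound to that role – change this password!
curl -u elastic --cacert certs/ca.crt -X POST \
  "https://node1.elastic.test.com:9200/_security/user/logstash_writer" \
  -H 'Content-Type: application/json' -d'
{ "password": "changeme", "roles": ["logstash_write_role"] }'
```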

Step 6. Enable TLS for Logstash on node1
[6-1] Create folder and copy certificates
[6-2] Convert logstash.key to PKCS#8 format for Beats input plugin
[6-3] Configure logstash.yml
[6-4] Create and configure conf.d/example.conf
[6-5] Start Logstash with the example configuration and check the Logstash log
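The key conversion in [6-2] uses openssl. A minimal, self-contained sketch (the generated throwaway RSA key stands in for the real logstash.key from Step 2):

```shell
# Stand-in for the logstash.key issued earlier (assumption for the demo)
openssl genrsa -out logstash.key 2048

# [6-2] Convert to unencrypted PKCS#8, as the Beats input plugin expects
openssl pkcs8 -topk8 -nocrypt -in logstash.key -out logstash.pkcs8.key
```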

Step 7. Run Filebeat and set up TLS on node1
[7-1] Create a config folder and copy certificates
[7-2] Create a new filebeat.yml
[7-3] Edit your new configuration file filebeat.yml
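A minimal filebeat.yml for [7-3], pointing at the TLS-enabled Logstash from Step 6, might look like this (hostname, log path, and cert path are assumptions):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/logstash-tutorial.log
output.logstash:
  hosts: ["node1.elastic.test.com:5044"]
  ssl.certificate_authorities:
    - /etc/filebeat/config/certs/ca.crt
```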
Step 8. Use Filebeat to ingest data
[8-1] Prepare input log data (logstash-tutorial.log) for Filebeat
[8-2] Start Filebeat
[8-3] Check the log
[8-4] Create index pattern
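Steps [8-2] and [8-3] then come down to the standard Filebeat CLI plus an index check (hostname and cert path assumed as above):

```shell
# [8-2] Start Filebeat with the config above, logging to stderr
./filebeat -e -c filebeat.yml

# [8-3] In another terminal, verify that events reached Elasticsearch
curl --cacert certs/ca.crt -u elastic \
  "https://node1.elastic.test.com:9200/_cat/indices/logstash-*?v"
```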


A few last things...