Elasticsearch

docker-compose.yml configuration

If Elasticsearch exits with "ERROR: [1] bootstrap checks failed", the problem in my setup was the discovery configuration:

discovery.type=single-node

Add this environment variable to run Elasticsearch as a single node. Have a nice day :)

version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch1
    environment:
      - discovery.type=single-node
      - transport.host=localhost
      - transport.tcp.port=9300
      - http.port=9200
      - http.host=0.0.0.0
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    networks:
      - elastic
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - esdata:/usr/share/elasticsearch/data
      
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_HOSTS: http://elasticsearch1:9200
    ports:
      - 5601:5601
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    networks:
      - elastic
 
  apmserver:
    image: docker.elastic.co/apm/apm-server:7.8.0
    container_name: apm-server
    command: --strict.perms=false
    ports:
      - 8200:8200
      - 8201:8200
    environment:
      - apm-server.host=0.0.0.0
      - setup.kibana.host=kibana:5601
      - output.elasticsearch.hosts=["elasticsearch:9200"]
      - apm-server.secret_token=xxVpmQB2HMzCL9PgBHVrnxjNXXw5J7bd79DFm6sjBJR5HPXDhcF8MSb3vv4bpg44
      - setup.template.enabled=true
      - logging.to_files=false
    # volumes:
    #   - ${PWD}/configs/apm-server.yml:/usr/share/apm-server/apm-server.yml
    networks:
        - elastic

networks:
  elastic:
    name: "elastic"

volumes:
  esdata:
    driver: local
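
With the file above saved as docker-compose.yml, the stack can be started and checked from the same directory (a minimal sketch; the ports are the host mappings defined above):

docker-compose up -d
docker-compose ps
curl 'http://localhost:9200'
curl 'http://localhost:9200/_cat/health?v'

The first curl should return the node and version information, the second a one-line cluster health summary.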

The named esdata volume is created under /var/lib/docker/volumes on the host.

For installation without Docker

Add the following line to /etc/elasticsearch/elasticsearch.yml and restart the service:

xpack.security.enabled: true

[root@ip-172-31-16-171 ~]# vi /etc/elasticsearch/elasticsearch.yml
[root@ip-172-31-16-171 ~]# systemctl restart elasticsearch
[root@ip-172-31-16-171 elasticsearch]# cd /usr/share/elasticsearch/
[root@ip-172-31-16-171 elasticsearch]# ll
total 576
drwxr-xr-x.  2 root root   4096 Sep  9 15:02 bin
drwxr-xr-x.  9 root root   4096 Sep  9 15:02 jdk
drwxr-xr-x.  3 root root   4096 Sep  9 15:02 lib
-rw-r--r--.  1 root root  13675 Sep  1 21:19 LICENSE.txt
drwxr-xr-x. 52 root root   4096 Sep  9 15:02 modules
-rw-rw-r--.  1 root root 544318 Sep  1 21:24 NOTICE.txt
drwxr-xr-x.  2 root root   4096 Sep  1 21:32 plugins
-rw-r--r--.  1 root root   7007 Sep  1 21:19 README.asciidoc
[root@ip-172-31-16-171 elasticsearch]# cd bin/
[root@ip-172-31-16-171 bin]# ll
total 20484
-rwxr-xr-x. 1 root root     2877 Sep  1 21:32 elasticsearch
-rwxr-xr-x. 1 root root      491 Sep  1 21:24 elasticsearch-certgen
-rwxr-xr-x. 1 root root      483 Sep  1 21:24 elasticsearch-certutil
-rwxr-xr-x. 1 root root      996 Sep  1 21:32 elasticsearch-cli
-rwxr-xr-x. 1 root root      433 Sep  1 21:24 elasticsearch-croneval
-rwxr-xr-x. 1 root root     4428 Sep  1 21:32 elasticsearch-env
-rwxr-xr-x. 1 root root     1828 Sep  1 21:32 elasticsearch-env-from-file
-rwxr-xr-x. 1 root root      184 Sep  1 21:32 elasticsearch-keystore
-rwxr-xr-x. 1 root root      440 Sep  1 21:24 elasticsearch-migrate
-rwxr-xr-x. 1 root root      126 Sep  1 21:32 elasticsearch-node
-rwxr-xr-x. 1 root root      172 Sep  1 21:32 elasticsearch-plugin
-rwxr-xr-x. 1 root root      431 Sep  1 21:24 elasticsearch-saml-metadata
-rwxr-xr-x. 1 root root      438 Sep  1 21:24 elasticsearch-setup-passwords
-rwxr-xr-x. 1 root root      118 Sep  1 21:32 elasticsearch-shard
-rwxr-xr-x. 1 root root      441 Sep  1 21:24 elasticsearch-sql-cli
-rwxr-xr-x. 1 root root 20882402 Sep  1 21:24 elasticsearch-sql-cli-7.9.1.jar
-rwxr-xr-x. 1 root root      426 Sep  1 21:24 elasticsearch-syskeygen
-rwxr-xr-x. 1 root root      426 Sep  1 21:24 elasticsearch-users
-rwxr-xr-x. 1 root root      332 Sep  1 21:28 systemd-entrypoint
-rwxr-xr-x. 1 root root      346 Sep  1 21:24 x-pack-env
-rwxr-xr-x. 1 root root      354 Sep  1 21:24 x-pack-security-env
-rwxr-xr-x. 1 root root      353 Sep  1 21:24 x-pack-watcher-env
[root@ip-172-31-16-171 bin]# elasticsearch-setup-passwords interactive
-bash: elasticsearch-setup-passwords: command not found
[root@ip-172-31-16-171 bin]# ./elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
[root@ip-172-31-16-171 bin]#
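
To confirm that authentication is now required, query the cluster with and without credentials (replace <elastic_password> with the password you just set):

curl 'http://localhost:9200/_cluster/health?pretty'
curl -u elastic:<elastic_password> 'http://localhost:9200/_cluster/health?pretty'

The first request should be rejected with a security_exception; the second should return the cluster health.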

Enable TLS / HTTPS

Stop Elasticsearch, then generate a certificate authority and a certificate signed by it. Press ENTER to accept the default file names; 123456 is used as the example password:

systemctl stop elasticsearch

/usr/share/elasticsearch/bin/elasticsearch-certutil ca

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

The output looks like this:

[root@ip-172-31-16-171 elasticsearch]# /usr/share/elasticsearch/bin/elasticsearch-certutil  ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]:
Enter password for elastic-stack-ca.p12 :
[root@ip-172-31-16-171 elasticsearch]# /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires an SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
         -ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files

Enter password for CA (elastic-stack-ca.p12) :
Please enter the desired output file [elastic-certificates.p12]:
Enter password for elastic-certificates.p12 :

Certificates written to /usr/share/elasticsearch/elastic-certificates.p12

This file should be properly secured as it contains the private key for
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
Copy the certificate to the Elasticsearch configuration directory and fix its ownership and permissions:

cp /usr/share/elasticsearch/elastic-certificates.p12 /etc/elasticsearch/
chown root:elasticsearch /etc/elasticsearch/elastic-certificates.p12
chmod 660 /etc/elasticsearch/elastic-certificates.p12
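
To make Elasticsearch use the PKCS#12 file for transport encryption, elasticsearch.yml needs the corresponding xpack settings. A minimal sketch, assuming the file was copied to /etc/elasticsearch as above:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

If you protected the certificate with a password (123456 in this example), store it in the Elasticsearch keystore rather than in the YAML file, then start Elasticsearch again:

/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
systemctl start elasticsearch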

Steps for securing the Elastic Stack

Step 1. Preparations

Download the components of Elastic Stack 7.1 or later that are used in this walkthrough: Elasticsearch, Kibana, Logstash, and Filebeat.

[1-1] Configure /etc/hosts file

For this example our node1 has a browser installed, so kibana.local will allow access to the Kibana web page.

# /etc/hosts file for [node1] (we need kibana.local and logstash.local)
127.0.0.1 kibana.local logstash.local
192.168.0.2 node1.elastic.test.com node1
192.168.0.3 node2.elastic.test.com node2
# /etc/hosts file for [node2] (we don't need kibana.local and logstash.local here)
192.168.0.2 node1.elastic.test.com node1
192.168.0.3 node2.elastic.test.com node2

Step 2. Create SSL certificates and enable TLS for Elasticsearch on node1

[2-1] Set environment variables (adapt these paths depending on where and how Elasticsearch was installed)

[root@node1 ~]# ES_HOME=/usr/share/elasticsearch
[root@node1 ~]# ES_PATH_CONF=/etc/elasticsearch

[2-2] Create tmp folder

[root@node1 ~]# mkdir tmp
[root@node1 ~]# cd tmp/
[root@node1 tmp]# mkdir cert_blog

[2-3] Create instance yaml file

[root@node1 cert_blog]# vi ~/tmp/cert_blog/instance.yml
# add the instance information to yml file
instances:
  - name: 'node1'
    dns: [ 'node1.elastic.test.com' ]
  - name: "node2"
    dns: [ 'node2.elastic.test.com' ]
  - name: 'my-kibana'
    dns: [ 'kibana.local' ]
  - name: 'logstash'
    dns: [ 'logstash.local' ]

[2-4] Generate CA and server certificates (once Elasticsearch is installed)

[root@node1 tmp]# cd $ES_HOME
[root@node1 elasticsearch]# bin/elasticsearch-certutil cert --keep-ca-key --pem --in ~/tmp/cert_blog/instance.yml --out ~/tmp/cert_blog/certs.zip

[2-5] Unzip the certificates

[root@node1 elasticsearch]# cd ~/tmp/cert_blog
[root@node1 cert_blog]# unzip certs.zip -d ./certs

[2-6] Elasticsearch TLS setup

[2-6-1] Copy cert file to config folder

[root@node1 ~]# cd $ES_PATH_CONF
[root@node1 elasticsearch]# pwd
/etc/elasticsearch
[root@node1 elasticsearch]# mkdir certs
[root@node1 elasticsearch]# cp ~/tmp/cert_blog/certs/ca/ca* ~/tmp/cert_blog/certs/node1/* certs
[root@node1 elasticsearch]# ll certs
total 12
-rw-r--r--. 1 root elasticsearch 1834 Apr 12 08:47 ca.crt
-rw-r--r--. 1 root elasticsearch 1834 Apr 12 08:47 ca.key
-rw-r--r--. 1 root elasticsearch 1509 Apr 12 08:47 node1.crt
-rw-r--r--. 1 root elasticsearch ...  node1.key

[2-6-2] Configure elasticsearch.yml

[root@node1 elasticsearch]# vi elasticsearch.yml 
## add the following contents
node.name: node1
network.host: node1.elastic.test.com
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: certs/node1.key
xpack.security.http.ssl.certificate: certs/node1.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.key: certs/node1.key
xpack.security.transport.ssl.certificate: certs/node1.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt

[2-6-3] Start and check cluster log

[root@node1 elasticsearch]# systemctl start elasticsearch
[root@node1 elasticsearch]# grep '\[node1\] started' /var/log/elasticsearch/elasticsearch.log
[o.e.n.Node               ] [node1] started

[2-6-4] Set built-in user password

[root@node1 elasticsearch]# cd $ES_HOME
[root@node1 elasticsearch]# bin/elasticsearch-setup-passwords auto -u "https://node1.elastic.test.com:9200"
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N] y
Changed password for user apm_system
PASSWORD apm_system = <apm_system_password>
Changed password for user kibana
PASSWORD kibana = <kibana_password>
Changed password for user logstash_system
PASSWORD logstash_system = <logstash_system_password>
Changed password for user beats_system
PASSWORD beats_system = <beats_system_password>
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = <remote_monitoring_user_password>
Changed password for user elastic
PASSWORD elastic = <elastic_password>

[2-6-5] Access _cat/nodes API via HTTPS

[root@node1 elasticsearch]# curl --cacert ~/tmp/cert_blog/certs/ca/ca.crt -u elastic 'https://node1.elastic.test.com:9200/_cat/nodes?v'
Enter host password for user 'elastic':
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.0.2           16          95  10    0.76    0.59     0.38 mdi       *      node1

When we generated our SSL certificates in step 2-4, we provided the --keep-ca-key option which means the certs.zip file contains a ca/ca.key file alongside the ca/ca.crt file. If you ever decide to add more nodes to your Elasticsearch cluster, you'll want to generate additional node certificates, and for that you will need both of those "ca" files as well as the password you used to generate them. Please keep a copy of these ca files and the password used to generate them in a safe place.
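
For example, a certificate for a hypothetical extra node could later be generated from the kept CA files like this (node3 and its DNS name are placeholders; you will be prompted for the CA key password if one was set):

bin/elasticsearch-certutil cert --ca-cert ~/tmp/cert_blog/certs/ca/ca.crt --ca-key ~/tmp/cert_blog/certs/ca/ca.key --pem --name node3 --dns node3.elastic.test.com --out ~/tmp/cert_blog/node3_certs.zip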

Step 3. Enable TLS for Kibana on node1

[3-1] Set environment variables

Adapt these variable paths depending on where and how Kibana was downloaded:

[root@node1 ~]# KIBANA_HOME=/usr/share/kibana
[root@node1 ~]# KIBANA_PATH_CONFIG=/etc/kibana

[3-2] Create config and config/certs folder and copy certs (once Kibana is installed)

Copy the certificate files created in step 2-4 into kibana/config/certs.
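
A minimal sketch of that copy, assuming the same ~/tmp/cert_blog layout used for Elasticsearch (the instance was named 'my-kibana' in instance.yml):

[root@node1 kibana]# mkdir -p config/certs
[root@node1 kibana]# cp ~/tmp/cert_blog/certs/ca/ca.crt ~/tmp/cert_blog/certs/my-kibana/* config/certs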

[root@node1 kibana]# ls config/certs
ca.crt  my-kibana.crt  my-kibana.key

[3-3] Configure kibana.yml

Remember to use the password generated for the built-in kibana user: replace <kibana_password> below with the value from step 2-6-4.

[root@node1 kibana]# vi kibana.yml 
server.name: "my-kibana"
server.host: "kibana.local"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/config/certs/my-kibana.crt
server.ssl.key: /etc/kibana/config/certs/my-kibana.key
elasticsearch.hosts: ["https://node1.elastic.test.com:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "<kibana_password>"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/config/certs/ca.crt" ]

[3-4] Start Kibana and test Kibana login

Access https://kibana.local:5601/ from a browser and log in as the elastic user with the password set in step 2-6-4. (For this example node1 has a browser installed, so kibana.local gives access to Kibana locally.)
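
As a quick sanity check from the shell, the Kibana status endpoint can also be queried over HTTPS, trusting the same CA that signed the Kibana certificate (paths assume step 3-2):

[root@node1 kibana]# curl --cacert /etc/kibana/config/certs/ca.crt -u elastic 'https://kibana.local:5601/api/status'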

Publicly trusted authorities have very strict standards and auditing practices to ensure that a certificate is not created without validating proper identity ownership. For the purpose of this walkthrough, we create a self-signed certificate for Kibana (meaning the generated certificate was signed by its own CA rather than a publicly trusted one). Because clients do not trust self-signed Kibana certificates, you will see a message similar to the following in your Kibana logs until proper trust is established by using certificates generated by an enterprise or public CA. This does not affect your ability to work in Kibana:

[18:22:31.675] [error][client][connection] Error: 4443837888:error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:../deps/openssl/openssl/ssl/s3_pkt.c:1498:SSL alert number 46

Step 4. Enable TLS for Elasticsearch on node2

[4-1] Set environment variables

[root@node2 ~]# ES_HOME=/usr/share/elasticsearch
[root@node2 ~]# ES_PATH_CONF=/etc/elasticsearch

[4-2] Set up TLS on node2

You can use the scp command to copy certificates from node1 to node2. Both nodes require the certificate and key in order to secure the connection. In a production environment, it is recommended to use a properly signed certificate for each node. For demonstration purposes, we are using an automatically generated CA certificate and a multi-DNS hostname certificate signed by our generated CA.

[root@node2 ~]# cd $ES_PATH_CONF
[root@node2 elasticsearch]# pwd
/etc/elasticsearch
[root@node2 elasticsearch]# mkdir certs
[root@node2 elasticsearch]# cp ~/tmp/cert_blog/certs/ca/ca.crt ~/tmp/cert_blog/certs/node2/* certs  
[root@node2 elasticsearch]# 
[root@node2 elasticsearch]# ll certs
total 12
-rw-r--r--. 1 root elasticsearch 1834 Apr 12 10:55 ca.crt
-rw-r--r--. 1 root elasticsearch 1509 Apr 12 10:55 node2.crt
-rw-r--r--. 1 root elasticsearch 1675 Apr 12 10:55 node2.key

[4-3] Configure elasticsearch.yml

[root@node2 elasticsearch]# vi elasticsearch.yml 
node.name: node2
network.host: node2.elastic.test.com
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: certs/node2.key
xpack.security.http.ssl.certificate: certs/node2.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.key: certs/node2.key
xpack.security.transport.ssl.certificate: certs/node2.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt

[4-4] Start and check cluster log

[root@node2 elasticsearch]# systemctl start elasticsearch
[root@node2 elasticsearch]# grep '\[node2\] started' /var/log/elasticsearch/elasticsearch.log
[o.e.n.Node               ] [node2] started

[4-5] Access _cat/nodes API via HTTPS

[root@node2 elasticsearch]# curl --cacert ~/tmp/cert_blog/certs/ca/ca.crt -u elastic:<password set previously> 'https://node2.elastic.test.com:9200/_cat/nodes?v'
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.0.2           25          80   5    0.18    0.14     0.30 mdi       *      node1
192.168.0.3           14          96  44    0.57    0.47     0.25 mdi       -      node2

Step 5. Prepare Logstash users on node1

[5-1] Create logstash_write_role

You can create the role in more than one way: through the Kibana Roles UI, or with the API in the Kibana Dev Tools tab:

POST /_security/role/logstash_write_role
{
    "cluster": [
      "monitor",
      "manage_index_templates"
    ],
    "indices": [
      {
        "names": [
          "logstash*"
        ],
        "privileges": [
          "write",
          "create_index"
        ],
        "field_security": {
          "grant": [
            "*"
          ]
        }
      }
    ],
    "run_as": [],
    "metadata": {},
    "transient_metadata": {
      "enabled": true
    }
}

And you'll get the response:

{"role":{"created":true}}

Users with this role assigned cannot delete any documents. The role only allows them to create indices whose names start with logstash and to index documents into those indices.

Note for ILM users: for the logstash_write_role to work with index lifecycle management (ILM), which is enabled by default in 7.3+, the role must include additional privileges beyond those shown above.
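
If you prefer the command line to the Kibana Dev Tools console, an equivalent role can be created directly against Elasticsearch with curl (a sketch, using the CA file and the elastic password from the earlier steps):

curl --cacert ~/tmp/cert_blog/certs/ca/ca.crt -u elastic -X POST 'https://node1.elastic.test.com:9200/_security/role/logstash_write_role' -H 'Content-Type: application/json' -d '{"cluster":["monitor","manage_index_templates"],"indices":[{"names":["logstash*"],"privileges":["write","create_index"]}]}'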

[5-2] Create the logstash_writer user (choose your own password for this user)

You can create the user in more than one way: through the Kibana Users UI, or with the API in the Kibana Dev Tools tab:

POST /_security/user/logstash_writer
{
  "username": "logstash_writer",
  "roles": [
    "logstash_write_role"
  ],
  "full_name": null,
  "email": null,
  "password": "<logstash_system_password>",
  "enabled": true
}

And you'll get the response:

{"user":{"created":true}}

Step 6. Enable TLS for Logstash on node1

[6-1] Create folder and copy certificates
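
The folder and copy mirror the Kibana step. A minimal sketch, assuming the certs.zip layout from step 2-4 (the instance was named 'logstash' in instance.yml) and /etc/logstash as the working directory:

[root@node1 logstash]# mkdir -p config/certs
[root@node1 logstash]# cp ~/tmp/cert_blog/certs/ca/ca.crt ~/tmp/cert_blog/certs/logstash/* config/certs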

[root@node1 logstash]# ls config/certs
ca.crt  logstash.crt  logstash.key

[6-2] Convert logstash.key to PKCS#8 format for Beats input plugin

[root@node1 logstash]# openssl pkcs8 -in config/certs/logstash.key -topk8 -nocrypt -out config/certs/logstash.pkcs8.key

[6-3] Configure logstash.yml

Remember to use the auto-generated password for the logstash_system user, i.e. the one printed in step 2-6-4.

[root@node1 logstash]# vi logstash.yml

Then edit:

node.name: logstash.local
path.config: /etc/logstash/conf.d/*.conf
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: '<logstash_system_password>'
xpack.monitoring.elasticsearch.hosts: [ 'https://node1.elastic.test.com:9200' ]
xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/logstash/config/certs/ca.crt

[6-4] Create and configure conf.d/example.conf

On the Elasticsearch output use the password defined in step 5-2.

[root@node1 logstash]# vi conf.d/example.conf 
input {
  beats {
    port => 5044
    ssl => true
    ssl_key => '/etc/logstash/config/certs/logstash.pkcs8.key'
    ssl_certificate => '/etc/logstash/config/certs/logstash.crt'
  }
}
output {
  elasticsearch {
    hosts => ["https://node1.elastic.test.com:9200","https://node2.elastic.test.com:9200"]
    cacert => '/etc/logstash/config/certs/ca.crt'
    user => 'logstash_writer'
    password => '<logstash_writer_password>'
  }
}

[6-5] Start Logstash with the example configuration and check the Logstash log
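
Assuming Logstash was installed from the distribution packages, it can be started and enabled as a service:

[root@node1 logstash]# systemctl start logstash
[root@node1 logstash]# systemctl enable logstash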

We should see the following log messages:

[INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x640c14d2@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}

And in the Kibana Monitoring tab, Logstash will be displayed (node info, pipeline settings, OS info, JVM info, process stats, and pipeline runtime stats):

Step 7. Run Filebeat and set up TLS on node1

[7-1] Create a config folder and copy certificates

[root@node1 filebeat]# mkdir config
[root@node1 filebeat]# mkdir config/certs
[root@node1 filebeat]# cp ~/tmp/cert_blog/certs/ca/ca.crt config/certs
[root@node1 filebeat]# ll config/certs/
total 4
-rw-r--r--. 1 root root 1834 Apr 20 00:18 ca.crt

[7-2] Create a new filebeat.yml

[root@node1 filebeat]# pwd
/etc/filebeat
[root@node1 filebeat]# mv filebeat.yml filebeat.yml.old

[7-3] Edit your new configuration file filebeat.yml

filebeat.inputs:
- type: log
  paths:
    - /etc/filebeat/logstash-tutorial-dataset/*.log
output.logstash:
  hosts: ["logstash.local:5044"]
  ssl.certificate_authorities:
    - /etc/filebeat/config/certs/ca.crt
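
Before starting Filebeat, the configuration and the connection to Logstash can be verified (filebeat test only checks the config and the reachability of the configured output, it does not ship any data):

[root@node1 filebeat]# filebeat test config -c /etc/filebeat/filebeat.yml
[root@node1 filebeat]# filebeat test output -c /etc/filebeat/filebeat.yml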

Step 8. Use Filebeat to ingest data

[8-1] Prepare input log data (logstash-tutorial.log) for Filebeat

First, download the input log data.
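
This is the sample dataset used in the Logstash getting-started guide; assuming that download location is still current, it can be fetched and unpacked like this:

[root@node1 ~]# curl -L -O https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz
[root@node1 ~]# gunzip logstash-tutorial.log.gz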

[root@node1 filebeat]# pwd
/etc/filebeat
[root@node1 filebeat]# mkdir logstash-tutorial-dataset
[root@node1 filebeat]# cp /root/logstash-tutorial.log logstash-tutorial-dataset/.
[root@node1 filebeat]# ll logstash-tutorial-dataset/
total 24
-rwxr-x---. 1 root root 24464 Apr 20 00:29 logstash-tutorial.log

[8-2] Start Filebeat

[root@node1 filebeat]# systemctl start filebeat
[root@node1 filebeat]# systemctl enable filebeat
Created symlink from /etc/systemd/system/multi-user.target.wants/filebeat.service to /usr/lib/systemd/system/filebeat.service.

[8-3] Check the log

We should see the following log messages:

INFO    log/harvester.go:216    Harvester started for file: /etc/filebeat/logstash-tutorial-dataset/logstash-tutorial.log

[8-4] Create index pattern

Next, create an index pattern that matches the data being ingested. This will let you visualize the data in Kibana, for example with Discover or Graph.
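
Before creating the pattern, you can confirm that Logstash has written data into logstash-* indices (same curl style as the earlier checks):

[root@node1 ~]# curl --cacert ~/tmp/cert_blog/certs/ca/ca.crt -u elastic 'https://node1.elastic.test.com:9200/_cat/indices/logstash*?v'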

Then select the Time Filter field name; in our example, this is @timestamp.

And that’s it! You have encrypted the communication between the different parts of the Elastic Stack, and now you’re safely and securely ingesting log data.

A few last things...

If you run into any issues while configuring security, the first place we’d recommend turning is the security troubleshooting guide in our documentation. It can help with many common issues. If you still have questions after that, you should check out our Elastic forums for additional help. Or if you want to talk to the Elastic Support team directly, start an Elastic subscription today and have direct access to a team of experts. Be safe out there!
