Elasticsearch RPM Installation
Elasticsearch is a database specialized in full-text search and therefore especially useful for things like logs and other long texts. It uses Apache Lucene under the hood and has a somewhat open-source license. If you are looking for a fully open-source alternative, have a look at OpenSearch, a fork of Elasticsearch.
Elasticsearch is designed to run as a cluster, but you can also create a single-node cluster and later expand it into a highly available multi-node cluster.
This article will show how to install Elasticsearch on a RedHat-based Linux system.
Add Repository
First of all, we need to add the RPM repository to be able to download the required packages. We will create a new repository file with the following content.
$ sudo vim /etc/yum.repos.d/elastic.repo
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install RPM
Next we can install the Elasticsearch RPM. During the installation, the password for the elastic user will be displayed. Note it down; it is the administration password for this instance.
$ sudo dnf install elasticsearch
# This will output your elastic user password
# BV4Ji*n=G_QEIgg-OkjV
Adjust Configuration
Before we start the instance, we need to make some modifications to the Elasticsearch configuration file.
$ sudo vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-es-cluster
node.name: elastic-01
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["elastic-01"]
The cluster.name is the name of the cluster. You can choose it freely.
The node.name is usually your hostname and is used to identify the nodes of the cluster. Even if you have only one node, Elasticsearch still works as a cluster, therefore it has to be set.
The network.host configures which IP address should listen for connections. Usually you would set it to 0.0.0.0 to listen on all interfaces of the server.
The http.port sets the listening port. The default is 9200.
Lastly, cluster.initial_master_nodes is a list of all nodes that should be part of the cluster at its initialization. In this case we will just use the one host to create a single-node cluster.
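As an aside: for a machine that will permanently stay alone, Elasticsearch also offers a dedicated single-node discovery mode as an alternative to the bootstrap list. A node bootstrapped this way is not meant to be expanded into a multi-node cluster later, so the cluster.initial_master_nodes approach used above is the more flexible one.

```yaml
# Alternative bootstrap for a permanent single-node setup:
# replaces the cluster.initial_master_nodes line above.
discovery.type: single-node
```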
Start Elasticsearch
Now that our instance is configured, we can start the service.
$ sudo systemctl enable --now elasticsearch
Check Cluster
Now that our instance should be running, we can use the API to check it.
$ curl -u elastic:BV4Ji*n=G_QEIgg-OkjV https://localhost:9200 -k
{
"name" : "elastic-01",
"cluster_name" : "my-es-cluster",
"cluster_uuid" : "nmMuslJpT3GoSlciqcRjKg",
"version" : {
"number" : "8.17.3",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "a091390de485bd4b127884f7e565c0cad59b10d2",
"build_date" : "2025-02-28T10:07:26.089129809Z",
"build_snapshot" : false,
"lucene_version" : "9.12.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
$ curl -u elastic:BV4Ji*n=G_QEIgg-OkjV https://localhost:9200/_cluster/health -k | jq
{
"cluster_name": "my-es-cluster",
"status": "green",
"timed_out": false,
"number_of_nodes": 1,
"number_of_data_nodes": 1,
"active_primary_shards": 3,
"active_shards": 3,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"unassigned_primary_shards": 0,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 100.0
}
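If you want to script this check, the status field can be extracted from the health response. The following is a minimal sketch that parses a stand-in JSON body instead of calling the live API; the sed pattern and variable names are my own, not part of Elasticsearch.

```shell
# Stand-in for the body returned by:
#   curl -u elastic:<password> -k https://localhost:9200/_cluster/health
health='{"cluster_name":"my-es-cluster","status":"green","number_of_nodes":1}'

# Pull the "status" field out of the JSON without jq
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"
```

Against a live instance you would replace the health variable with the output of the curl call above; a status of yellow or red then warrants a look at the unassigned shard counters.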
Remove bootstrap config
The next step is to remove the bootstrap parameter from our configuration. Otherwise Elasticsearch could bootstrap a fresh cluster on a later service start instead of rejoining the existing one, overwriting our existing cluster. Therefore we need to remove or comment out the cluster.initial_master_nodes directive in the configuration file.
$ sudo vim /etc/elasticsearch/elasticsearch.yml
#cluster.initial_master_nodes: ["elastic-01"]
Open Firewall Ports
To connect to our Elasticsearch instance from other hosts, we need to open the firewall ports: 9200 for the HTTP API and 9300 for the inter-node transport.
$ sudo firewall-cmd --add-port=9200/tcp
$ sudo firewall-cmd --add-port=9300/tcp
$ sudo firewall-cmd --runtime-to-permanent
Install Kibana
Kibana is a nice frontend to display data from an Elasticsearch instance; it may be compared with Grafana. It also offers some graphical information about the status of our Elasticsearch instance and can be used as a web UI for it. That's why we will install it too.
Since we already added the repository for our Elasticsearch instance, we can simply install it.
$ sudo dnf install kibana
Set Password for Kibana User
Next we will set a password for the kibana_system user, which Kibana uses to connect to Elasticsearch. This is done with a tool that talks to the REST API of Elasticsearch.
$ sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system
# ZPrGZNJy-lctel9d*QVs
Save the output. It is the password for the kibana_system user.
Adjust Configuration
Next we need to adjust the kibana configuration.
$ sudo vim /etc/kibana/kibana.yml
server.port: 5601
server.host: 0.0.0.0
elasticsearch.hosts: ["https://localhost:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "ZPrGZNJy-lctel9d*QVs"
elasticsearch.ssl.verificationMode: none
server.port and server.host serve the same function as in the Elasticsearch configuration.
elasticsearch.hosts lists the Elasticsearch hosts. In our case, we can use localhost.
elasticsearch.username and elasticsearch.password are the credentials we created in the last step.
Lastly, elasticsearch.ssl.verificationMode is set to none, since I did not create trusted certificates for this instance.
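To verify the kibana_system credentials before starting Kibana, you can call the _security/_authenticate endpoint of Elasticsearch. The following sketch only assembles the curl command; the password is the placeholder value from the reset step above, so substitute your own.

```shell
# Build a Basic-Auth header for the kibana_system user and print the
# curl command that would test the login against Elasticsearch.
user='kibana_system'
pass='ZPrGZNJy-lctel9d*QVs'   # placeholder: use the password from elasticsearch-reset-password
auth=$(printf '%s:%s' "$user" "$pass" | base64)
echo "curl -k -H \"Authorization: Basic $auth\" https://localhost:9200/_security/_authenticate"
```

Running the printed command against the instance should return a JSON body containing the username kibana_system if the password is correct.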
Start Kibana Service
Now that everything is configured, we can start the kibana service.
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now kibana
Open Firewall Ports
And also open the firewall ports.
$ sudo firewall-cmd --add-port=5601/tcp
$ sudo firewall-cmd --runtime-to-permanent
Now you can access Kibana at http://localhost:5601/ with the credentials elastic:<your-password>.
Add additional Nodes
If you want to add more nodes to your cluster for increased performance or redundancy, you can follow these steps. This is not necessary to use the cluster.
Add Repository
On the new node we need to add the repository, just as we did on the first.
$ sudo vim /etc/yum.repos.d/elastic.repo
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install RPM
Then install the Elasticsearch RPM.
$ sudo dnf install elasticsearch
Open Firewall Ports
And open the firewall ports.
$ sudo firewall-cmd --add-port=9200/tcp
$ sudo firewall-cmd --add-port=9300/tcp
$ sudo firewall-cmd --runtime-to-permanent
Use Token to Join
Now we need to go back to our first node and create an enrollment token. That way the new node can authenticate itself.
# elastic-01
[admin@elastic-01]$ sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
On the new node we can input this token like so.
# elastic-02
[admin@elastic-02]$ sudo /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <your-join-token>
# confirm with y
Adjust Configuration
There are still some configuration changes left, though.
$ sudo vim /etc/elasticsearch/elasticsearch.yml
# cluster.name has to be identical on all nodes in the cluster
cluster.name: my-es-cluster
# node.name has to be unique to every node
node.name: elastic-02
network.host: 0.0.0.0
http.port: 9200
Mainly, the cluster.name should match the one defined on the first node, and the node.name has to differ from the first node's name; again, preferably the hostname.
Start Elasticsearch
Now we can start the service.
$ sudo systemctl enable --now elasticsearch
Check if Node successfully joined
And if we query the API for the cluster health, we should see number_of_nodes: 2.
$ curl -u elastic:BV4Ji*n=G_QEIgg-OkjV https://localhost:9200/_cluster/health -k | jq
{
"cluster_name": "my-es-cluster",
"status": "green",
"timed_out": false,
"number_of_nodes": 2,
"number_of_data_nodes": 2,
"active_primary_shards": 36,
"active_shards": 72,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"unassigned_primary_shards": 0,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 100.0
}
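This check can also be scripted. The following sketch pulls number_of_nodes out of a stand-in response body; the sed pattern is my own parsing, not an Elasticsearch tool.

```shell
# Stand-in for the /_cluster/health response after the second node joined
health='{"cluster_name":"my-es-cluster","status":"green","number_of_nodes":2}'

# Extract the node count from the JSON without jq
nodes=$(printf '%s' "$health" | sed -n 's/.*"number_of_nodes":\([0-9]*\).*/\1/p')
echo "nodes in cluster: $nodes"
```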
Repeat these steps for every other node you want to add to the cluster.