Elastic, Kibana and Logstash

Basic Install Steps: Elasticsearch.

https://www.elastic.co/guide/en/elasticsearch/reference/7.9/deb.html#deb-repo
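In brief (see the guide above for the authoritative steps), the apt-based install on a Debian/Ubuntu host looks something like this:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

sudo apt-get install apt-transport-https

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

sudo apt-get update && sudo apt-get install elasticsearch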

The default install will only listen on localhost. To expose the Elasticsearch instance to other IPs you need to modify /etc/elasticsearch/elasticsearch.yml as follows. Note this shows an example for a single-node cluster; the discovery section would change in a clustered setup.

# ---------------------------------- Network -----------------------------------

# Set the bind address to a specific IP (IPv4 or IPv6):

# change this line – 0.0.0.0 will listen on any interface

network.host: 0.0.0.0

# Set a custom port for HTTP:

#http.port: 9200

# For more information, consult the network module documentation.

# --------------------------------- Discovery ----------------------------------

# Pass an initial list of hosts to perform discovery when this node is started:

# The default list of hosts is ["127.0.0.1", "[::1]"]

# add this line, as it is required when setting network.host or the Elasticsearch service will not start.

discovery.seed_hosts: ["127.0.0.1", "[::1]"]
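After changing elasticsearch.yml, restart the service and check that it responds on the new address (the host name below is just a placeholder for wherever ES is running):

sudo systemctl restart elasticsearch

curl http://<elasticsearch-host>:9200

The curl call should return a small JSON document with the node name, cluster name and version.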

You need to create an endpoint in StorageGRID (SG) so that data from SG can be sent to Elasticsearch to be indexed and searched.

Kibana is used to provide visualisations of the data held in Elasticsearch.

Basic Install Steps: Logstash.

You need to install Java first and ensure JAVA_HOME is set:

echo $JAVA_HOME

This should print the path of the Java installation.

If it prints nothing, set JAVA_HOME in /etc/environment (and add the JVM directory to PATH if needed); on this host the JVM lives in /usr/lib/jvm/java-8-openjdk-amd64.
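For example (assuming that path), /etc/environment would contain a line such as:

JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"

Then log out and back in and re-run echo $JAVA_HOME to confirm it is picked up.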

https://www.elastic.co/guide/en/logstash/7.9/installing-logstash.html#_apt
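If the Elastic apt repository was already added for Elasticsearch above, installing Logstash itself is just:

sudo apt-get update && sudo apt-get install logstash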

Logstash is used to parse log files and send the results to Elasticsearch for searching; these can then be displayed as charts and dashboards with Kibana.
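As a minimal sketch of such a pipeline (the config file name, log path and index name are just example choices, and <elasticsearch-host> is a placeholder), a file like /etc/logstash/conf.d/syslog.conf could read a local log and ship it to the ES instance configured above:

input {
  file {
    # example log file to read; adjust the path to suit
    path => "/var/log/syslog"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    # the Elasticsearch instance set up earlier
    hosts => ["http://<elasticsearch-host>:9200"]
    # one index per day
    index => "syslog-%{+YYYY.MM.dd}"
  }
}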

Basic Install Steps: Kibana.

As per the guide on the Elastic website. As with Elasticsearch, you need to allow outside IPs to connect to the server remotely; the configuration file is /etc/kibana/kibana.yml and the changed lines are shown below.

# Kibana is served by a back end server. This setting specifies the port to use.

server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.

# The default is 'localhost', which usually means remote machines will not be able to connect.

# To allow connections from remote users, set this parameter to a non-loopback address.

server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.

# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath

# from requests it receives, and to prevent a deprecation warning at startup.

# This setting cannot end in a slash.

#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with

# `server.basePath` or require that they are rewritten by your reverse proxy.

# This setting was effectively always `false` before Kibana 6.3 and will

# default to `true` starting in Kibana 7.0.

#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.

#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.

server.name: "ansiblehost23"
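After editing kibana.yml, restart the service and browse to the host on port 5601 (the host name is again a placeholder):

sudo systemctl restart kibana

Then open http://<kibana-host>:5601 in a browser to confirm Kibana is reachable.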

StorageGRID Configuration.

StorageGRID will send metadata from buckets to Elasticsearch for analysis. Platform services need to be allowed for the tenant account; the search integration itself is then configured per bucket. This is enabled by default when a new account is created, but do check (under Edit Account) in SG Admin if you are using an existing account.

Log in to the Tenant Manager and ensure you have a bucket created.

Then choose S3 and create an Endpoint. The endpoint points to your Elasticsearch instance; the URN is the difficult part, as it changes depending on where your ES instance is:

Display name – as it sounds

URI – the IP and port of the ES instance (for example http://<es-host>:9200), remembering the config change made above for ES (elasticsearch.yml) for network.host, as by default ES only allows localhost to connect.

For a local (non-AWS) ES instance the URN would be this:

URN: urn:sgws:es:::domain/storagegrid/objects/metadata

urn – means it is local (for an instance hosted on AWS it would be an ARN, starting arn:aws, instead)

sgws – StorageGRID

es – specifies Elasticsearch

The three ':' characters are necessary: the first is a separator, and the other two are where the region and account-id would go for AWS

domain – required 'as is'

storagegrid – the domain name; the value does not matter, it just needs to be unique

objects – the index name – required

metadata – the type of data – required.

With the endpoint created, you can test the connection.

Now go to Buckets, and add suitable XML to start pushing data out to ES.

Under Buckets, configure Search integration:

Example shown below:

ID – rule name

Status – Enabled or Disabled

Prefix – which objects to match; leave it empty (as in the example below) to match every object.

Destination – the URN from the endpoint configuration (remember the correct number of ':')

<MetadataNotificationConfiguration>

    <Rule>

        <ID>Rule-1</ID>

        <Status>Enabled</Status>

        <Prefix></Prefix>

        <Destination>

            <Urn>urn:sgws:es:::domain/storagegrid/objects/metadata</Urn>

        </Destination>

    </Rule>

</MetadataNotificationConfiguration>

Note – you can have multiple rules, so data with one prefix is sent to one ES instance and data with a different prefix is sent somewhere else; remember that each rule's ID must be different.
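Once the search integration XML is saved and an object has been uploaded to the bucket, a quick way to confirm that metadata is arriving is to query the index named in the URN (objects in this example) directly on the ES instance:

curl "http://<elasticsearch-host>:9200/objects/_search?pretty"

This should return the indexed records, one per object.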