How to Install and Configure VNC Server on RHEL 9

In this guide, we demonstrate how to install and configure VNC server on RHEL 9.

Virtual Network Computing, popularly known as VNC, is a network protocol for accessing graphical desktops of remote systems. The protocol transfers the mouse and keyboard inputs back and forth between the server, which is the system being remotely accessed and the client, which is the device used to access or connect to the remote server.

The VNC protocol is beneficial to IT support teams when it comes to offering assistance to remote teams. It’s lightweight and uses negligible CPU and RAM, so it can be run on low-specification hardware.

Prerequisites

  • Pre-Installed RHEL 9 System with Desktop Environment
  • Sudo user with admin rights
  • Red Hat Subscription or locally configured yum repository.

Without any delay, let’s deep dive into VNC server installation and configuration steps.

1) Install VNC Server on RHEL 9

The first step is to install TigerVNC (or VNC server) on your instance of RHEL 9. To do so, log in and, first, update the system.

$ sudo dnf update -y

Reboot your system once all the updates are installed.

$ sudo reboot

Next, install TigerVNC as shown.

$ sudo dnf install tigervnc-server -y

Install-VNC-Server-RHEL9-DNF-Command

The command installs TigerVNC alongside other additional packages and dependencies.

2) Configure TigerVNC

The next step is to set up TigerVNC to allow remote users access to our Red Hat desktop environment. To proceed, copy the default configuration file, vncserver@.service, to the /etc/systemd/system directory. Be sure to also include the display number on which the VNC service will listen.

Here, we will specify the display number as 3.

$ sudo cp /usr/lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:3.service

Next, you need to set a username for a regular login user. To do so, edit the VNC users’ file as follows.

$ sudo vim /etc/tigervnc/vncserver.users

Add the display number and user.

:3=linuxtechi

VNC-Display-Number-RHEL9

Save the changes and exit the configuration file. Next, set the user’s password using the command:

$ vncpasswd

Provide the password and verify it. When asked whether to enter a view-only password, decline by typing ‘n’ and hit ENTER.

Set-VNC-Password-RHEL9

Next, configure GNOME to be the current session. This will be the session for the VNC server.

$ echo gnome-session > ~/.session

Thereafter, create a configuration file inside the VNC hidden directory which resides in the home directory.

$ vim ~/.vnc/config

Add the following configuration.

session=gnome
securitytypes=vncauth,tlsvnc
geometry=1280x720

The first line tells VNC to use the GNOME environment. The second line specifies the security types VNC accepts during authentication. In the last line, the ‘geometry‘ option specifies the screen resolution. You can set this to your preferred resolution.

Save the changes and exit the configuration file.

3) Start and Enable VNC Service

Next, start and enable the VNC service.

$ sudo systemctl start vncserver@:3.service
$ sudo systemctl enable vncserver@:3.service

Start-Enable-VNC-Server-Service-RHEL9
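
Optionally, you can confirm that the unit started correctly (assuming display :3 as configured above):

$ sudo systemctl status vncserver@:3.service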

4) Configure Firewall

Be sure to also allow the VNC server service across the firewall. Note that a VNC display listens on TCP port 5900 plus the display number, so display :3 corresponds to port 5903.

$ sudo firewall-cmd --permanent --add-service=vnc-server
$ sudo firewall-cmd --permanent --add-port=5903/tcp

Next, reload the firewall to apply the changes.

$ sudo firewall-cmd --reload

Firewall-Rules-for-VNC-Server-RHEL9
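
To confirm the rules are in place, you can list the active firewall configuration:

$ sudo firewall-cmd --list-all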

5) Access the VNC Server From a Remote System

With the VNC server already configured, the last step is to access it remotely from a different system. But first, check the IP address of the VNC server using ip command:

$ ip a

Check-IP-Address-IP-Command-RHEL9

Next, install the TigerVNC client application by downloading the binary file from SourceForge.

Launch the application and provide the VNC server IP and display number as shown. Then click ‘Connect’.

VNC-viewer-Connect-Remote-System

Next, provide the VNC user’s password and hit ENTER.

Enter-VNC-Password-for-VNC-Server-RHEL9

Finally, you will see the login screen shown.

RHEL-Login-Screen-Post-VNC-Viewer-Connection

From here you can, straightaway, log in using the user’s login password.

RHEL-9-Desktop-Enviornment-via-VNC-Viewer

Conclusion

We have demonstrated how to install and set up VNC Server on RHEL 9. We hope you found this useful. Feel free to weigh in with your feedback.

How to Install Kali Linux on Windows 11 using WSL

In this post, we will show you how to install Kali Linux on Windows 11 using WSL. 

A while back, running a virtual machine was the only way of running another operating system, such as Linux, inside Windows. The drawback of virtual machines is their high resource usage, which in most cases slows down applications and services on the host, especially if the underlying host has low computing specifications.

The Windows Subsystem for Linux, commonly abbreviated as WSL, is an abstraction layer that lets you run a Linux environment, including its utilities and tools, directly on Windows. It does this without the resource overhead of traditional virtual machines or having to configure a dual-boot setup. WSL was first released in 2016. WSL 2 is the current version and provides performance improvements and other enhancements to boost system performance.

Prerequisites

For this to work, ensure that your system meets the following requirements:

  • Virtualization needs to be enabled in your BIOS/UEFI settings.
  • You must be running an x64 system on version 1903 or higher, with Build 18362 or higher.

Without much ado, let’s get started.

1) Enable Windows Subsystem for Linux (WSL)

The first step is to enable the WSL feature on your Windows system. To do this, launch Windows PowerShell as the Administrator.

Launch-PowerShell-Windows11

Next, run the following command on the terminal to enable the WSL feature.

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

This takes a minute or two to complete.

2) Enable Virtual Machine Platform Feature

The Virtual Machine Platform feature provides the virtualization support needed to emulate other operating systems and is a prerequisite for features such as WSL 2.

Still in PowerShell, run the following command to enable the Virtual Machine Platform feature.

dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

PowerShell-Command-to-enable-virtual-machine-feature

In addition, (although not a requirement) consider setting WSL version 2 as the default version.

wsl --set-default-version 2

Set-Wsl-Version-2-Command-Powershell

3) Install the Linux Kernel Update Package

To avoid running into errors while running Kali Linux, it’s recommended to install the Linux Kernel Update Package for WSL. Therefore, download the WSL2 Linux kernel update package for x64 machines from Microsoft’s WSL documentation.

Once you have downloaded the installer, run it by double-clicking on it.

Download-WSL-Update-Windows11

This opens the WSL setup wizard as shown. Click ‘Next’ and accept the default selections.

WSL-Setup-Wizard-Windows

4) Install Kali Linux on Windows using WSL

With the requisite features now enabled, the next step is to install Kali Linux. Head over to the Microsoft Store and search for the ‘Kali Linux’ app.

Click the ‘Get’ button.

Choose-Get-Kali-Linux-Windows-WSL

The download of the app, which is about 237 MB, will begin, and the progress will be indicated as shown.

Kali-Linux-Download-Progress-Windows-WSL

Once the download is complete, click ‘Open’.

Click-Open-Kali-Linux-WSL-Windows

This opens a terminal window that prompts you to provide a username and password for your user account. Once you are done, you’ll automatically be logged in and dropped into a Bash shell in Kali Linux.

Windows-WSL-Kali-Linux-Prompt-UserName-Password

You can verify the version of Kali Linux using the command:

$ cat /etc/os-release

Verify-KaliLinux-Version-WSL-Console
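
Back in PowerShell, you can also confirm that Kali Linux is registered and see which WSL version it is running under:

wsl -l -v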

5) Install Win-Kex Utility to Enable GUI (Optional)

To make the most out of your Kali Linux instance, installing a GUI is recommended. To do so, install the Win-Kex package which provides a desktop experience for Kali Linux in WSL 2.

$ sudo apt update
$ sudo apt install kali-win-kex

The package is quite big (approximately 2.3 GB) and takes some time to install.

Install-Kali-Win-Kex-Windows-WSL

Once installed, launch the GUI interface as shown.

$ kex --win -s

Provide the password and confirm it. When prompted to enter a view-only password, type ‘n’ and hit ENTER.

Start-kaliLinux-Win-Kex-Windows-WSL

This immediately launches the Kali Linux GUI on your screen as shown.

Kali-Linux-GUI-Windows-WSL

The Win-Kex utility provides three modes:

  • Window mode: Starts Kali Linux in a dedicated window.
  • Enhanced session mode: Uses RDP protocol to provide a richer UI experience.
  • Seamless mode: Shares the Windows desktop between Windows and Kali applications and menus.

For more information about the Win-Kex utility, check out the Kali Linux Win-Kex documentation.
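
As a quick sketch based on that documentation, the other two modes are typically launched with their own flags (the flag names here are assumptions to verify with kex --help on your installed version):

$ kex --sl -s     # seamless mode, with sound
$ kex --esm -s    # enhanced session (RDP) mode, with sound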

Conclusion

And there you go. In this guide, we have demonstrated how to install Kali Linux on Windows 11 using WSL 2. Your feedback on this guide is welcome.

How to Setup OpenLDAP Server on Ubuntu 22.04

OpenLDAP is a free and open-source implementation of LDAP (Lightweight Directory Access Protocol). It’s a highly customizable LDAP option that comes complete with a software suite for LDAP servers which includes slapd (standalone LDAP daemon), slurpd (standalone LDAP update replication daemon), and other tools, libraries, clients, and utilities used for managing LDAP servers.

In this guide, we focus on how to setup OpenLDAP server on Ubuntu 22.04 step-by-step.

Prerequisites

To follow along in this guide, ensure that you have the following in place:

  • An Instance of Ubuntu 22.04
  • SSH connection to the server

1) Setup Server Hostname

Right off the bat, you need to configure the hostname or Fully Qualified Domain Name (FQDN) for your server. In this guide, we will configure the OpenLDAP server with the hostname ldap.linuxtechi.com and the IP address 172.105.152.235.

Run the following command as root. Be sure to replace ldap.linuxtechi.com with your preferred hostname and domain.

# hostnamectl set-hostname ldap.linuxtechi.com

Next, update the /etc/hosts file with the server hostname and corresponding IP address for hostname resolution within the network.

# echo '172.105.152.235 ldap.linuxtechi.com' >> /etc/hosts

Now ping the server hostname; you should get successful replies.

# ping -c 3 ldap.linuxtechi.com

Set-Hostname-Ubuntu-Openldap

2) Install OpenLDAP Packages

The next step is to install OpenLDAP. To do so run the following command to install the OpenLDAP packages.

# apt install slapd ldap-utils

Apt-Install-slapd-Ubuntu

During the installation, you will be prompted to configure an administrator password for your LDAP server. Provide a strong one and hit ENTER.

Ldap-Server-Admin-Password-Ubuntu

Next, re-enter the password to confirm it and hit ENTER.

Confirm-Ldap-Admin-Passsword-Ubuntu

3) Setup OpenLDAP Server

Once OpenLDAP is successfully installed, you need to go a step further and reconfigure the main package, slapd. To accomplish this, run the following command.

# dpkg-reconfigure slapd

The command will generate a series of prompts on your terminal. First, you need to initialize the OpenLDAP server configuration. On the first prompt, select the ‘No’ option so that the OpenLDAP server configuration is not skipped.

Choose-No-Cofiguring-slapd-ubuntu

Next, provide a DNS domain name. This will be used to construct the base DN of the LDAP directory. In this example, we will use the domain name linuxtechi.com. As such, the DN will be represented as “dc=linuxtechi,dc=com”. Then hit ‘ENTER’.

DNS-Domain-Name-Ldap-Server-Ubuntu

Next, provide a name for your organization that will also form part of the base DN. Once again, we will provide the same name as the domain name.

Organization-Name-Ldap-Server-Ubuntu

Next, provide the Administrator password for your LDAP directory and hit ‘ENTER’.

Enter-Administrator-Password-Ldap-Ubuntu

Be sure to confirm it and press ‘ENTER’.

Re-Enter-Administrator-Password-Ldap-Ubuntu

When prompted to remove the database when slapd is purged, select ‘NO’.

Choose-No-Skip-Database-Removal-Slapd-Ubuntu

Then, select ‘Yes’ to move the old database out of the way and create room for a new database.

Select-Yes-Move-Old-Databases-Slapd-Ubuntu

Finally, you should see the following output.

Dpkg-Reconfigure-Slapd-Ubuntu

Next, you need to make changes to the main OpenLDAP configuration file, so open it using your preferred editor. Here we are using nano.

$ sudo nano /etc/ldap/ldap.conf

Locate and uncomment the lines beginning with “BASE” and “URI” and provide the domain name for your OpenLDAP server. In our case, the “BASE” is “dc=linuxtechi,dc=com” and the “URI” for the OpenLDAP server is “ldap://ldap.linuxtechi.com“.

BASE dc=linuxtechi,dc=com
URI  ldap://ldap.linuxtechi.com

Save the changes and exit the configuration file. Then restart slapd daemon and check its status as follows.

$ systemctl restart slapd
$ systemctl status slapd

Restart-Slapd-Service-Ubuntu

Then run the following command to confirm the OpenLDAP basic configuration. This should give you the following output.

# ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:///

Ldapsearch-External-Ubuntu

4) Setup Base group for OpenLDAP Users

The next step is to create base groups for OpenLDAP users. To demonstrate this, we will create two base groups: people and groups. The ‘people’ group will be used for storing regular users while the ‘groups’ group will store the groups on your LDAP server.

Therefore, we will create the base-groups file as follows.

# nano base-groups.ldif

Paste the following lines to the configuration file.

dn: ou=people,dc=linuxtechi,dc=com
objectClass: organizationalUnit
ou: people

dn: ou=groups,dc=linuxtechi,dc=com
objectClass: organizationalUnit
ou: groups

Save the changes and exit.

To add the base groups, run the ‘ldapadd’ command against the ‘base-groups.ldif’ file. Provide the OpenLDAP admin password when prompted and press ‘ENTER’.

# ldapadd -x -D cn=admin,dc=linuxtechi,dc=com -W -f base-groups.ldif

The output will display information informing you that the groups have successfully been added.

Ldapadd-base-groups-ubuntu

To confirm that the groups have been added, run the following command.

# ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:///

The command generates a block of output displaying all the details of your LDAP configuration including the groups we have just created.

Ldapsearch-verify-new-groups-ubuntu

5) Add a new group to the Base Group

With the base groups already created, in this section, we will proceed to add a new group to the ‘groups’ base group.

To achieve this, we will create a new group file called group.ldif.

# nano group.ldif

Paste the following lines of code. Here, we have specified a new group called support_engineers with a group ID of 5000.

dn: cn=support_engineers,ou=groups,dc=linuxtechi,dc=com
objectClass: posixGroup
cn: support_engineers
gidNumber: 5000

Once done, save the changes and exit the configuration file. Then run the command below to add the ‘support_engineers’ group to the ‘groups’ group.

# ldapadd -x -D cn=admin,dc=linuxtechi,dc=com -W -f group.ldif

The command generates the following output confirming that the support_engineers group was successfully added.

Add-NewGroup-Ldapadd-Ubuntu

Then execute the following command to verify that the group ‘support_engineers’ is part of the ‘groups’ group with a GID of ‘5000’.

# ldapsearch -x -LLL -b dc=linuxtechi,dc=com '(cn=support_engineers)' gidNumber

Ldapsearch-check-new-group-ubuntu

6) Create a new OpenLDAP User

The last step is to create an OpenLDAP user and attach the user to a specific base group. But first, you need to generate an encrypted password for the user. To do so, run the following command, and be sure to provide a strong password.

# slappasswd

The password will be printed in hashed form. Copy the entire hash beginning with {SSHA} through the last character and paste it somewhere, as you will need it in the next step.

Slappasswd-Command-Output-Ubuntu

Next, create a new user file as shown.

# nano user.ldif

Paste the following lines of code. In this configuration, we are creating a new user called ‘Alex’ with a UID of 7000. The default home directory will be “/home/alex” and the default login shell “/bin/bash”. The new user will be a part of the base group called ‘people’ with a GID of 7000.

Paste the encrypted user’s password in the userPassword attribute or parameter.

dn: uid=alex,ou=people,dc=linuxtechi,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: alex
sn: smith
givenName: alex
cn: alex smith
displayName: alex smith
uidNumber: 7000
gidNumber: 7000
userPassword: {SSHA}uQVjd8MLaJ7AXEd/grqViuKnk9tNojdy
gecos: Alex Smith
loginShell: /bin/bash
homeDirectory: /home/alex

Save and exit the configuration file.

To add the user to the ‘people’ group, run the following command:

# ldapadd -x -D cn=admin,dc=linuxtechi,dc=com -W -f user.ldif

You should get the following confirmation output.

Add-User-Group-Ldapadd-Command-Ubuntu

To confirm the creation of the user, execute the command.

# ldapsearch -x -LLL -b dc=linuxtechi,dc=com '(uid=alex)' cn uidNumber gidNumber

This prints out the user details, including the common name (cn), UID, and GID.

Search-User-ldapsearch-Ubuntu
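
Optionally, you can also verify that the new user can authenticate against the directory by binding as that user with ldapwhoami (part of ldap-utils) and entering the password you generated with slappasswd:

# ldapwhoami -x -D "uid=alex,ou=people,dc=linuxtechi,dc=com" -W

If the bind succeeds, the command prints the user’s DN.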

Conclusion

In this guide, we have successfully installed and configured the OpenLDAP server on Ubuntu 22.04. We have gone a notch higher and created base groups, groups, and users and added them to the base groups. That’s all for this guide, your feedback is welcome.

Also Read: How to Install GitLab on Ubuntu 22.04 | 20.04

How to Install Apache Kafka on Ubuntu 22.04

In this guide, we will demonstrate how to install Apache Kafka on Ubuntu 22.04 step-by-step.

In Big data, enormous streams of data records are generated by millions of data sources which include social media platforms, enterprise systems, mobile apps, and IoT devices to mention a few. The main challenges that arise with such an enormous amount of data are two-fold: efficient collection and analysis of the data. To overcome such challenges, you need a reliable and robust messaging system.

Developed by the Apache Software Foundation and written in Java and Scala, Apache Kafka is an open-source distributed pub/sub (publish-subscribe) event streaming platform that handles enormous amounts of data. It also allows you to transmit messages from one point to another. It works alongside the Zookeeper synchronization service and seamlessly integrates with Apache Spark for analytics and large-scale data processing.

In comparison to other messaging systems, Apache Kafka provides better throughput, inherent fault-tolerance, and replication which makes it an excellent choice for enterprise message processing applications. Top companies using Apache Kafka in their tech stacks include Netflix, Microsoft, and AirBnB.

Prerequisites

  • Pre Installed Ubuntu 22.04
  • Sudo User with Admin Rights
  • Internet Connectivity

1) Install OpenJDK on Ubuntu 22.04

Since Apache Kafka is written in Java, installing Java is a prerequisite. So, log in to your server and refresh the local package index.

$ sudo apt update

Next, install OpenJDK which is a free and open-source implementation of the Java Standard Edition Platform. Here, we are installing OpenJDK 11 which is an LTS release.

$ sudo apt install openjdk-11-jdk -y

Install-Openjdk11-for-kafka-Ubuntu

Once installed, verify the version of Java as shown.

$ java -version

Check-Java-Version-Kafka-Ubuntu

2) Install Apache Kafka on Ubuntu 22.04

With all the prerequisites already in place, let’s proceed and install Apache Kafka. To do so, head over to the Apache Kafka downloads page and locate the latest binary release in tarball format. At the time of writing this guide, Apache Kafka 3.5.1 is the latest release.

To download it, use the wget command-line utility.

$ wget https://downloads.apache.org/kafka/3.5.1/kafka_2.13-3.5.1.tgz

Download-Apache-Kafka-Wget-Command-Ubuntu

Next, extract the tarball file using the tar command-line tool.

$ tar xvf kafka_2.13-3.5.1.tgz

Once extracted, a folder called kafka_2.13-3.5.1 is created. Next, move this folder to the /usr/local directory and rename it kafka.

$ sudo mv kafka_2.13-3.5.1 /usr/local/kafka

Move-Kafka-Binary-user-local-ubuntu

3) Create Kafka and ZooKeeper Systemd Unit files

In this step, we will create systemd unit files for Kafka and ZooKeeper services. This will allow you to easily manage the services using the systemctl command.

Let’s start by creating the Zookeeper systemd file using the nano editor as shown.

$ sudo nano /etc/systemd/system/zookeeper.service

Paste the following lines of code which define Zookeeper’s systemd service.

[Unit]
Description=Apache Zookeeper server
Documentation=http://zookeeper.apache.org
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
ExecStart=/usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties
ExecStop=/usr/local/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

Save the changes and exit.

Next, create Kafka’s systemd file

$ sudo nano /etc/systemd/system/kafka.service

Paste the following lines of code which define Kafka’s systemd service.

[Unit]
Description=Apache Kafka Server
Documentation=http://kafka.apache.org/documentation.html
Requires=zookeeper.service

[Service]
Type=simple
Environment="JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64"
ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh

[Install]
WantedBy=multi-user.target

Be sure to save the changes and exit.

4) Start Kafka and ZooKeeper Systemd Services

Once all systemd files are in place, notify systemd of the changes made.

$ sudo systemctl daemon-reload

Next, start Kafka and Zookeeper services.

$ sudo systemctl start zookeeper
$ sudo systemctl start kafka
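
Optionally, enable both services so that they start automatically at boot:

$ sudo systemctl enable zookeeper
$ sudo systemctl enable kafka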

Confirm if the services are running. For Zookeeper, run:

$ sudo systemctl status zookeeper

Zookeeper-Service-Status-Ubuntu

For the Apache Kafka service, execute:

$ sudo systemctl status kafka

Kafka-Service-Status-Ubuntu

5) Create a Kafka Topic

With Kafka and all components installed, we will create a topic and try to send a message. In Kafka, a topic is a fundamental unit used to organize messages. Each topic should have a unique name across a cluster. Topics allow users to send and read data between Kafka servers.

You can create as many topics as you want in Kafka. Now, let’s create a topic called sampleTopic on localhost port 9092 with a single replication factor.

$ cd /usr/local/kafka
$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic sampleTopic

Let us break down the command:

  • --create: Creates a new topic
  • --replication-factor: Specifies how many copies of the data will be kept
  • --partitions: Specifies the number of partitions the topic’s data will be split across
  • --topic: Specifies the name of the topic. Topics are split into one or more partitions.

Upon running the command, you will get a confirmation that the topic was successfully created.

Output

Created topic sampleTopic

Creating-Topic-Kafka-Server-Ubuntu

As mentioned earlier, you can create as many topics as you want using the same syntax. To check or list the topics created, run the command:

$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092

List-Kafka-Topic-Ubuntu
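
You can also inspect a topic’s details, such as its partition count, leader, and replicas, using the --describe flag:

$ bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic sampleTopic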

6) Send and Receive a Message in Kafka

In Kafka, a ‘producer’ is an application that writes data into topics across different partitions. Applications integrate a Kafka client library to write a message to Apache Kafka. Kafka client libraries exist for a myriad of programming languages, including Java and Python, among others.

Let us now run the producer and generate a few messages on the console.

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic sampleTopic

You will be prompted to type a few messages. In this case, we typed a few lines.

> Hello World!
> Welcome to Apache Kafka
> This is the first topic

Once you are done, you can exit or keep the terminal running. To consume the messages, open a new terminal and run the following command:

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sampleTopic --from-beginning

The messages you typed will be displayed on the terminal as shown in the output below.

Kafka-Consumer-Command-Ubuntu

Conclusion

This wraps up our guide today. In this tutorial, we have demonstrated how to install Apache Kafka on Ubuntu 22.04. In addition, we have seen how you can send and receive a message within a Kafka cluster.

How to Install PHP 8 on RHEL 9 | Rocky Linux 9 | AlmaLinux 9

PHP is a recursive acronym for Hypertext Preprocessor. It is a popular and widely used server-side scripting language used in web development and can even be embedded in HTML pages. Although there are other scripting languages such as Perl and Ruby, PHP remains popular and powers some of the most popular websites and platforms such as Facebook, WordPress, and MailChimp.

Prerequisites

  • Pre-installed RHEL 9 or Rocky Linux 9 or Alma Linux 9
  • Sudo user with admin rights
  • Internet Connectivity

In this post, we look at how to install PHP 8.2 on RHEL 9 / Rocky Linux 9 / AlmaLinux OS 9.

Step 1) Update System Packages

To get started, log into your server instance and be sure to update installed packages to their latest versions as shown below.

$ sudo dnf update -y

dnf-update-for-php8-installation

To ensure that all changes have come into effect, restart your server instance.

$ sudo reboot

Step 2) Add the Remi repository

In this tutorial, we will install the latest version of PHP which is PHP 8.2 from the Remi repository. This is a third-party repository that provides the latest versions of PHP and other packages for RHEL-based Linux distributions.

To install Remi, first, you need to install EPEL as a prerequisite. EPEL, short for Extra Packages for Enterprise Linux, is a repository maintained by Fedora which contains additional packages for Red Hat Enterprise Linux (RHEL), and RHEL-based distributions such as Alma Linux and Rocky Linux.

Therefore, install EPEL as shown.

$ sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm -y

Add-EPEL-Repo-RHEL9-RockyLinux9-AlmaLinux9

Once done, confirm that the EPEL repository has been installed.

$ rpm -q epel-release

The next step is to add the Remi repository, and to do so, run the command:

$ sudo dnf install http://rpms.remirepo.net/enterprise/remi-release-9.rpm -y

Add-Remi-Repo-RHEL9-RockyLinux9-AlmaLinux9

You can verify if Remi is installed using the command:

$ rpm -q remi-release

Additionally, you can list configured repositories on your system as follows. Be sure that you can see EPEL and Remi repository entries.
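
$ dnf repolist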

DNF-Repolist-EPEL-Remi-RHEL9-RockyLinux9

Step 3) Install PHP 8.2 on RHEL 9 / Rocky 9 / AlmaLinux 9

With both repositories installed, the next step is to install PHP 8.2. Before proceeding, reset the default PHP module.

$ sudo dnf module reset php -y

Reset-PHP-Module-RHEL9

Next, enable the PHP Remi 8.2 module that contains the PHP packages to be installed.

$ sudo dnf module install php:remi-8.2

Enable-php-remi-module-rhel9-rockylinux9

When prompted, hit ‘Y’ and press ENTER to import the GPG key.

Press-Y-Import-GPG-Keys-RHEL9

Next, install PHP 8.2 and dependency packages

$ sudo dnf -y install php

Once installed, confirm the PHP version as shown

$ php --version
OR
$ php -v

From the output, you can see that we have installed PHP 8.2.7.

Check-PHP-Version-RHEL9-RockyLinux9

Additionally, you can install PHP modules to extend the functionality of PHP. This takes the following syntax:

$ sudo dnf install php-extension_name

If you are installing multiple PHP extensions, you can use the shortened form as shown.

$ sudo dnf install php-{extension1,extension2}

In this example, we are installing a set of PHP extensions as shown.

$ sudo dnf install php-{zip,json,pear,mysqlnd,xml,fpm,curl,opcache,intl,cgi}

When prompted, press ‘y’ and hit ENTER to proceed.

Installing-php-extensions-rhel9-dnf-command

To view all installed modules, run the command:

$ php -m

View-Installed-PHP-Modules-RHEL9

Integrating PHP with Apache Web Server

The mod_php module is an Apache module that enables Apache to parse PHP code and interpret PHP files. When a user requests a PHP page, the web server fetches the requested PHP file, executes the code, and returns HTML content to the client’s web browser. This eliminates the need for a separate PHP interpreter.

The Apache web server comes with the mod_php module as one of the packages. So you can go ahead and install Apache as shown.

$ sudo dnf install httpd -y

Next, enable the Apache web server service.

$ sudo systemctl enable httpd
$ sudo systemctl start httpd

Be sure to confirm that Apache is running.

$ sudo systemctl status httpd

httpd-service-status-rockylinux9

Next, create a sample PHP file in the web root directory.

$ sudo nano /var/www/html/info.php

Add the following code

<?php
phpinfo();
?>

Save the changes and exit the file. To confirm that the web server is ready to serve PHP pages, browse your URL as shown.

http://server-ip/info.php

This displays what you can see below, a confirmation that your web server is able to parse PHP code.

PHP-Info-WebPage-Httpd-RHEL9
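
Once you have confirmed that PHP is working, it’s good practice to remove the test file so that system details are not exposed publicly:

$ sudo rm /var/www/html/info.php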

Integrating PHP with Nginx Web Server

When running the Nginx web server, PHP-FPM (FastCGI Process Manager) is the recommended way of processing PHP pages. It’s faster than mod_php and other traditional CGI-based approaches, and it consumes fewer computing resources such as CPU and memory. In addition, it allows you to run PHP as a daemon, which can be managed using systemd.

To demonstrate how to use PHP with Nginx, we will install Nginx alongside PHP-FPM as shown.

$ sudo dnf install nginx php-fpm -y

Install-nginx-php-fpm-rhel9

If you have Apache installed, be sure to stop the service since it runs on port 80, which Nginx will use.

$ sudo systemctl stop httpd

Next, enable and start Nginx and PHP-FPM.

$ sudo systemctl enable --now nginx php-fpm

Enable-Start-Nginx-php-fpm-rhel-rockylinux

Be sure to confirm that Nginx and PHP-FPM services are running.

$ sudo systemctl status nginx

Nginx-Service-Status-RHEL9-RockyLinux9

$ sudo systemctl status php-fpm

Php-fpm-Service-Status-RHEL9-RockyLinux9

Next, edit the www.conf file as shown:

$ sudo vim /etc/php-fpm.d/www.conf

Update the following directives so they read as shown.

user = nginx
group = nginx
listen = /var/run/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660

Save and exit the configuration file. Next, we need to configure Nginx to forward requests to PHP-FPM. To accomplish this, access the Nginx main configuration file.

$ sudo vim /etc/nginx/nginx.conf

Add the following line under the root directive.

index index.php index.html;

Next, add the following lines to forward requests to PHP-FPM

location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }

Nginx-php-fpm-integration-RHEL9-RockyLinux9

Save the changes and exit the configuration file. To apply the changes made, restart Nginx and PHP-FPM.

$ sudo systemctl restart nginx
$ sudo systemctl restart php-fpm

Next, verify if the Nginx configuration is sound

$ sudo nginx -t

Check-Nginx-Syntax-RHEL-RokyLinux

Next, we will create a sample info.php file

$ sudo nano /usr/share/nginx/html/info.php

Add the following code

<?php
phpinfo();
?>

Save the changes and exit the file. To confirm that Nginx is ready to serve PHP pages, once again, browse your URL as shown.

http://server-ip/info.php

Nginx-Info-PHP-Page-RHEL9-RockyLinux9

Conclusion

In this guide, we have shown you how to install PHP 8.2 on RHEL 9 and RHEL-9-based distros such as Rocky Linux 9 and AlmaLinux 9. We have also demonstrated how you can configure PHP to work with the Apache and Nginx web servers.

How to Upgrade Debian 11 to Debian 12 (Bookworm) via CLI

Debian 12, code-named Bookworm, was released on June 10th, 2023. It ships with a myriad of new features and improvements over Debian 11.

The latest release of Debian bundles over 11,089 new packages, coming to a total of over 64,419 packages. Over 67% of the packages in Debian (about 43,254 packages) have been updated, and a further 10% (about 6,296 packages) that were in Bullseye have been removed and marked as obsolete.

In this post, we look at how to upgrade Debian 11 to Debian 12 via the CLI step by step. But before we do so, let’s check out some of the key highlights of Debian 12:

What’s new in Debian 12?

Here are some of the key highlights of Debian 12:

1.    Linux Kernel 6.1

Debian 12 is powered by Linux kernel 6.1 which is an LTS version that includes numerous enhancements such as improved support for AMD CPUs, experimental support for Rust, and ARM SoC support among many others.

2.    Updated Installer

A new installer is included with Debian 12. It has received several enhancements and other features.

3.    New Wallpaper

As with any other Linux distribution release, Debian 12 includes a new and artistic wallpaper known as ‘Emerald’. It’s quite elegant and polished just like the emerald gemstone from which its name is derived.

4.    Support for Pipewire

Debian 12 provides support for PipeWire out of the box, which replaces PulseAudio as the default audio server.

5.    Newer Software Versions

The latest release of Debian provides the latest software releases which include:

  • MariaDB 10.11
  • PHP 8.2
  • Python 3.11.2
  • Nginx 1.22
  • OpenJDK 17
  • LibreOffice 7.4
  • GNOME 43
  • Perl 5.36
  • Vim 9.0
  • Samba 4.17

And many more. Those are just a few highlights of what to expect in Debian 12. Check out the release notes for more information.

Important: Before doing any upgrade activity, take a backup of your system using either the Timeshift or rsnapshot utility.

Let’s now see how to upgrade Debian 11 to Debian 12.

Step 1: Update the Local Package Index

To get started, it’s a good idea to refresh the package lists on your system as follows:

$ sudo apt update

Apt-Update-Lsb-Release-Before-Upgrade

Step 2: Install All Available Updates

Run the following apt commands one after another to install all the available updates:

$ sudo apt upgrade
$ sudo apt full-upgrade
$ sudo apt --purge autoremove

Sudo-Apt-Upgrade-Debian-Cli

After installing the updates, reboot the system:

$ sudo reboot

Note: Any package which is marked as held may cause issues while upgrading, so it is recommended to unhold such packages before proceeding with the upgrade.

$ sudo apt-mark showhold | more
$ sudo apt-mark unhold <pkg-name>

Step 3: Update the sources.list file

Since the mission is to upgrade from Debian 11 (Bullseye) to Debian 12 (Bookworm), you need to update the /etc/apt/sources.list file by substituting every instance of bullseye with bookworm.

Before making the changes, take a backup of the Debian 11 sources.list file using the cp command below:

$ sudo cp -v /etc/apt/sources.list /opt/sources.list-bakup-debian11

Now run the sed command to replace every instance of bullseye with bookworm in the sources.list file.

$ sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
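
You can optionally confirm that the substitution worked before refreshing the package lists:

$ grep bookworm /etc/apt/sources.list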

Once again, update the package lists to apply the changes.

$ sudo apt update

Replace-Bullseye-with-bookworm-sources-list-deban

Step 4: Upgrade to Debian 12

Now upgrade the system by running the following command:

$ sudo apt full-upgrade

Debian-Apt-Full-Upgrade-Cli-Command

You will be presented with the following screen; press ‘q’ to proceed further.

Pres-q-to-Proceed-further-Debian-Upgrade

When prompted, choose “Yes” and press ENTER to proceed. You will most likely run into a few prompts. For example, you might be asked whether to restart services. Choose any option that you deem fit by pressing ENTER.

Restart-Services-During-Debian-Upgrade

You will also get prompted to select which version of the sshd_config file to keep. Here, we have chosen to keep the locally installed version.

Keep-Local-Verison-SSHD-During-Debian-Upgrade

Similarly, you will get the same message about the GRUB configuration file.

Keep-Local-Grub-Version-During-Debian-Upgrade

After the upgrade, reboot the system using the following command:

$ sudo reboot

Reboot-Post-Debian-Upgrade

Step 5: Verify Debian 12 After Upgrade

Once the system is back online after the reboot, open the terminal and run the following commands to check the Debian version:

$ lsb_release -a

$ cat /etc/os-release

You can also confirm the kernel version as shown.

$ uname -rms

Verify-Debian-Upgrade-Cli-Commands

It is recommended to remove outdated packages after the upgrade using the following apt command:

$ sudo apt --purge autoremove

Conclusion

This brings us to the end of this guide. In this tutorial, we showed you how to upgrade to Debian 12 (Bookworm) from Debian 11 (Bullseye).

How to Install Nagios on Rocky Linux 9 / Alma Linux 9

Formerly known as Nagios, Nagios Core is a free, open-source, and powerful network monitoring tool that allows you to monitor servers and applications on your network. You can monitor both Linux and Windows servers as well as running applications.

Nagios core enables the monitoring of crucial metrics such as memory usage, swap space, disk usage, load average, and current running processes on a system.

In this guide, you will learn how to install Nagios Core on Rocky Linux 9 and Alma Linux 9 step-by-step.

Prerequisites

To get things running, you need to ensure that you have the following in place first.

  • An instance of a Rocky Linux 9 / AlmaLinux 9 server with SSH access
  • A sudo user configured
  • Internet Connectivity

Step 1: Update packages 

To start off, access your server via SSH and upgrade all the packages to their latest versions.

$ sudo dnf update

Step 2: Install Apache and PHP

The Nagios web front end is PHP-driven. Therefore, as part of the installation procedure, you need to install the Apache web server and PHP.

Therefore, install Apache and enable it as shown.

$ sudo dnf install httpd
$ sudo systemctl enable httpd

Once installed, confirm the Apache version as shown.

$ httpd -V

httpd-version-Check-Linux

Be sure to allow port 80 on the firewall as shown.

$ sudo firewall-cmd --add-port=80/tcp --permanent
$ sudo firewall-cmd --reload

Next, we will install PHP from the Remi repository which provides the latest PHP versions. In this case, we will install PHP 8.2 which is the latest version of PHP.

To enable the Remi repository, run the command below:

$ sudo dnf install dnf-utils http://rpms.remirepo.net/enterprise/remi-release-9.rpm

Next, enable the Remi repository as follows

$ sudo dnf module reset php -y
$ sudo dnf module enable php:remi-8.2 -y

Once enabled, install PHP and associated PHP modules

$ sudo dnf install php php-gd php-opcache php-curl -y

When the installation is complete, verify that PHP has successfully been installed.

$ php --version

php-version-check-command-linux

Then enable php-fpm as shown

$ sudo systemctl enable --now php-fpm

Step 3: Configure SELinux

Before you proceed further, ensure that you set SELinux to permissive mode as shown. This is crucial for you to access the Nagios web interface from the browser.

$ sudo sed -i 's/SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
$ sudo setenforce 0
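
You can verify the current SELinux mode with:

$ getenforce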

Step 4: Download Nagios Core

The next step is to download and install Nagios core. Before you do so, install EPEL repository and some dependencies and development libraries as shown.

$ sudo dnf install epel-release
$ sudo dnf install gcc glibc glibc-common gd gd-devel make net-snmp openssl-devel unzip wget gettext autoconf net-snmp-utils postfix automake perl-Net-SNMP -y

Next, download the latest Nagios Core tarball file from the official downloads page using the wget command as follows. At the time of writing this guide, the latest version is Nagios 4.4.11.

$ wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.4.11.tar.gz

Once downloaded, extract the tarball file.

$ tar -xvf nagios-4.4.11.tar.gz

Next, move the uncompressed Nagios folder to the /usr/src/nagios directory.

$ sudo mv nagios-4.4.11 /usr/src/nagios

Next, navigate to the Nagios directory.

$ cd /usr/src/nagios

Then configure the Nagios build using the following command.

$ sudo ./configure

At the end of the command, you will get the following output.

Run-Configure-Nagios-Server-RockyLinux

Then compile the source code as shown. This installs the main program, HTML files, and CGIs.

$ sudo make all

Step 5: Create Nagios User and Group

The Nagios server needs a dedicated user and group to run. Therefore, run the following command:

$ sudo make install-groups-users

Make-Install-Groups-Users-Nagios-RockyLinux

Next, add the Apache user to the Nagios group.

$ sudo usermod -a -G nagios apache

Step 6: Install Nagios Core

Next, run the following command to install the init script in the /lib/systemd/system path.

$ sudo make install-init

Once that’s done, create an external command directory for Nagios to process commands from external applications.

$ sudo make install-commandmode

Next, install sample configuration files.

$ sudo make install-config

Be sure to also install sample Apache configuration files.

$ sudo make install-webconf

For all the changes to come into effect, restart the Apache web server.

$ sudo systemctl restart httpd

Then initialize the systemd unit file as shown.

$ sudo make install-daemoninit

Step 7: Enable HTTP Authentication

In this step, we will configure HTTP authentication for Nagios Core. This ensures that only authorized personnel have access to the Nagios web interface.

To enable HTTP authentication, run the following command:

$ sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

You will be prompted to supply a new password and confirm it. This is the password for the admin user called nagiosadmin.

Create-nagiosadmin-User-RockyLinux

You can add another user by editing the /usr/local/nagios/etc/cgi.cfg file and replacing all instances of “nagiosadmin” with your preferred username, for example:

authorized_for_read_only=username

Once you have done so, be sure to restart the Apache web server.

$ sudo systemctl restart httpd

Step 8: Install Nagios Core Plugins

The next step is to install Nagios plugins. These are standalone extensions that process command-line arguments and monitor just about anything in Nagios Core.

To proceed, head over to the Nagios Plugin downloads page and download the latest tarball file.

$ wget https://github.com/nagios-plugins/nagios-plugins/releases/download/release-2.4.4/nagios-plugins-2.4.4.tar.gz

Next extract the tarball file.

$ sudo tar -xvf nagios-plugins-2.4.4.tar.gz

Next, move the extracted folder to the /usr/src/ directory.

$ sudo mv nagios-plugins-2.4.4 /usr/src/nagios-plugins

Some dependencies are required by Nagios plugins to function as expected. Install them as follows.

$ sudo dnf install openssl-devel net-snmp-utils postgresql-devel openssh-clients lm_sensors perl-Net-SNMP openldap-devel bind-utils samba-client fping -y

To install the plugins, navigate to the /usr/src/nagios-plugins directory.

$ cd /usr/src/nagios-plugins

Then compile the source code as follows.

$ sudo ./configure
$ sudo make
$ sudo make install

Step 9: Install NRPE Plugin

The next step is to install the NRPE (Nagios Remote Host Plugin Executor) plugin. This is an agent used for communicating with remote hosts.

To install the NRPE plugin run the command

$ sudo dnf install nrpe

Confirm that the plugin has been installed

$ nrpe -V

Nrpe-Version-check-RockyLinux

Now enable and start the NRPE service

$ sudo systemctl enable nrpe --now

Then confirm if it is running.

$ sudo systemctl status nrpe

From the output, the service is up and running.

Nrpe-Service-Status-Nagios-RockyLinux

The NRPE service listens on port 5666. You need to allow this port across the firewall as shown.

$ sudo firewall-cmd --add-port=5666/tcp --permanent
$ sudo firewall-cmd --reload

Step 10: Add a Remote Host

With NRPE installed, it’s now time to configure a remote host. Create a configuration file as shown

$ sudo vi /usr/local/nagios/etc/objects/nodes.cfg

Add the following lines, which define the remote host, host group, and the services to be monitored. Be sure to adjust the values to match your environment.

# REMOTE HOST DEFINITION
define host {
    use                     remote-linux-server
    host_name               node1.linuxtechi.com
    alias                   node1
    address                 192.168.2.50
}

# REMOTE HOST GROUP DEFINITION
define hostgroup {
    hostgroup_name         remote-linux-servers
    alias                  remote-linux-servers
    members                node1.linuxtechi.com
}

define service {
    use                     local-service           
    host_name               node1.linuxtechi.com
    service_description     PING
    check_command           check_ping!100.0,20%!500.0,60%
}

define service {
    use                     local-service          
    host_name               node1.linuxtechi.com
    service_description     Root Partition

    check_command           check_local_disk!20%!10%!/

}

define service {
    use                     local-service           
    host_name               node1.linuxtechi.com
    service_description     Current Users
    check_command           check_local_users!20!50
}

define service {
    use                     local-service           
    host_name               node1.linuxtechi.com
    service_description     Total Processes
    check_command           check_local_procs!250!400!RSZDT
}

define service {
    use                     local-service           
    host_name               node1.linuxtechi.com
    service_description     Current Load
    check_command           check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
}

define service {
    use                     local-service         
    host_name               node1.linuxtechi.com
    service_description     Swap Usage
    check_command           check_local_swap!20%!10%
}

define service {
    use                     local-service      
    host_name               node1.linuxtechi.com
    service_description     SSH
    check_command           check_ssh
    notifications_enabled   0
}

define service {
    use                     local-service           
    host_name               node1.linuxtechi.com
    service_description     HTTP
    check_command           check_http
    notifications_enabled   0
}

Save the changes and exit the configuration file. For Nagios to start using the configuration, edit the following configuration file.

$ sudo vi /usr/local/nagios/etc/nagios.cfg

Comment out the localhost configuration file and append the following line.

#cfg_file=/usr/local/nagios/etc/objects/localhost.cfg
cfg_file=/usr/local/nagios/etc/objects/nodes.cfg

Save the changes and exit the configuration file.
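
Before starting Nagios, it’s good practice to verify that the configuration contains no errors using Nagios’ built-in verification mode:

$ sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

If the output reports zero errors, you can safely proceed to start the service.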

Step 11: Start and Enable Nagios Core daemon

Before you start the Nagios server, run the following commands:

$ sudo mkdir /usr/local/nagios/var/rw
$ sudo chown nagios.nagios /usr/local/nagios/var/rw

Now start the Nagios Core server and confirm its status.

$ sudo systemctl start nagios
$ sudo systemctl status nagios

Nagios-Service-Status-RokcyLinux-AlmaLinux

Step 12: Access Nagios Core Web UI

Finally, access the Nagios web interface by navigating to the following server IP address.

http://server-ip/nagios

Authenticate with the login details you configured in Step 7.

Access-Nagios-Core-Web-UI

Once logged in, you should be able to see the Nagios Core welcome page as shown.

Nagios-Core-WebUI-Dashboard-RockyLinux-AlmaLinux

Conclusion

And that’s it for this guide. We have demonstrated how to install Nagios Core on Rocky Linux 9 / AlmaLinux 9. Your feedback and comments are welcome.

How to Set Proxy Settings for APT Command

In this guide, you will learn how to set proxy settings for the APT command in Ubuntu/Debian Linux distributions.

A proxy server is an intermediary server that sits between a client system or end user requesting the resource and the resource itself. In most cases, a proxy server acts as a gateway between end users and the internet.

For organizations and enterprise environments, a proxy server provides a number of benefits. It controls internet usage by blocking sites that are deemed to impact employees’ productivity. It also enhances privacy and improves the organization’s security through data encryption.

There are several ways to set proxy settings for the apt command, so let’s jump right in.

Note: For demonstration, we will use Ubuntu 22.04.

Configure Proxy Setting For APT Using A Proxy file

The easiest way to configure proxy settings for the APT command is by creating a proxy.conf file as shown.

$ sudo vi /etc/apt/apt.conf.d/proxy.conf

For a proxy server without a username and password, add the following entries as shown

For the HTTP Proxy, add the following entry:

Acquire::http::Proxy "http://proxy-IP-address:proxyport/";

Do the same for the HTTPS Proxy:

Acquire::https::Proxy "http://proxy-IP-address:proxyport/";

Example:

$ cat  /etc/apt/apt.conf.d/proxy.conf
Acquire::http::Proxy "http://192.168.56.102:3128/";
Acquire::https::Proxy "http://192.168.56.102:3128/";

If your proxy server requires a username and password, add them as follows:

Acquire::http::Proxy "http://username:password@proxy-IP-address:proxyport";
Acquire::https::Proxy "http://username:password@proxy-IP-address:proxyport";

Example:

$ cat  /etc/apt/apt.conf.d/proxy.conf
Acquire::http::Proxy "http://init@PassW0rd321#@192.168.56.102:3128/";
Acquire::https::Proxy "http://init@PassW0rd321#@192.168.56.102:3128/";

Once you are done, save the changes and exit the configuration file. The proxy settings will take effect the next time you run the APT package manager.

For example, you can update the local package index and then install net-tools package

$ sudo apt update
$ sudo apt install net-tools -y

Apt-Install-Package-Proxy-Settings

Check the proxy server logs to confirm that the apt command is using the proxy server to download packages. On the proxy server, run:

# tail -f /var/log/squid/access.log  | grep -i 192.168.56.240

Here, ‘192.168.56.240’ is the IP address of our Ubuntu machine.

Grep-Proxy-Logs-Ubuntu-Machine

Perfect, the output above confirms that the apt command on our Ubuntu system is downloading packages via the proxy server (192.168.56.102).
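If you only need the proxy for a one-off command rather than permanently, you can also pass it directly on the command line with the -o option, using the same proxy address:

$ sudo apt -o Acquire::http::Proxy="http://192.168.56.102:3128/" update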

An Alternative Way of Specifying Proxy Details

Apart from the first approach, you can specify the proxy details in a more compact way using a single Acquire block. Once again, create a proxy.conf file as follows.

$ sudo vi /etc/apt/apt.conf.d/proxy.conf

For a Proxy server without a username and password, define it as shown.

Acquire {
  http::Proxy "http://proxy-IP-address:proxyport/";
  https::Proxy "http://proxy-IP-address:proxyport/";
}

A sample file would look like the one below:

$ sudo vi /etc/apt/apt.conf.d/proxy.conf

Proxy-conf-Apt-Command-Ubuntu

For a proxy server that requires a username and password:

Acquire {
  http::Proxy "http://username:password@proxy-IP-address:proxyport/";
  https::Proxy "http://username:password@proxy-IP-address:proxyport/";
}

Save the changes and exit the configuration file. Just a reminder that these settings take immediate effect once you start using the APT package manager.
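If some hosts should bypass the proxy altogether (for example, an internal mirror), APT also supports per-host overrides; the special value DIRECT tells APT to connect without a proxy. A minimal sketch, assuming a hypothetical mirror at repo.example.com:

Acquire::http::Proxy::repo.example.com "DIRECT";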

Conclusion

That concludes this guide. We have demonstrated how to configure proxy settings for the APT package manager, which is used in Debian/Ubuntu Linux distributions. That’s all for now. Keep it LinuxTechi!

Also Read: How to Install Go (Golang) on Ubuntu Linux Step-by-Step

How to Install CRI-O (Container Runtime) on Ubuntu 22.04 https://www.linuxtechi.com/install-crio-container-runtime-on-ubuntu/ https://www.linuxtechi.com/install-crio-container-runtime-on-ubuntu/#comments Tue, 21 Mar 2023 03:33:38 +0000 https://www.linuxtechi.com/?p=15499 CRI-O is an opensource and lightweight container runtime for Kubernetes. It is an implementation of the Kubernetes Container Runtime Interface (CRI) using Open Container Initiative (OCI) compatible runtimes. It’s a perfect alternative to Docker when running Kubernetes. In this guide, we will demonstrate how to ... Read more

CRI-O is an open-source and lightweight container runtime for Kubernetes. It is an implementation of the Kubernetes Container Runtime Interface (CRI) using Open Container Initiative (OCI) compatible runtimes. It’s a perfect alternative to Docker when running Kubernetes.

In this guide, we will demonstrate how to install CRI-O on Ubuntu 22.04 LTS step by step.

Prerequisites

Before you start out, here is what you need:

  • An instance of Ubuntu 22.04 with SSH access
  • A sudo user configured on the instance
  • Fast and stable internet connectivity

With that out of the way, let us get started.

Step 1: Update the system and Install dependencies

Right off the bat, log into your server instance and update the package lists as follows.

$ sudo apt update

Once the local package index has been updated, install the dependencies as follows.

$ sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common -y

Step 2: Add CRI-O repository

To install CRI-O, we need to add or enable its repository on Ubuntu. But first, you need to define two variables based on your operating system release and the CRI-O version that you want to install.

As such, define the variables as shown below.

$ export OS=xUbuntu_22.04
$ export CRIO_VERSION=1.24

Once that is done, run the following set of commands to add the CRI-O Kubic repository.

$ echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /"| sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
$ echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list

Set-Crio-Repository-Ubuntu-Linux

Thereafter, import the GPG keys for the CRI-O repositories:

$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS/Release.key | sudo apt-key add -
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key add -

This yields the output shown below.

Import-GPG-Keys-for-Crio-Repository-Ubuntu-Linux

Once again update the package index to synchronize the system with the newly added CRI-O Kubic repositories.

$ sudo apt update

Step 3: Install CRI-O On Ubuntu 22.04

With the repositories added, install CRI-O and its runc runtime using the APT package manager.

$ sudo apt install cri-o cri-o-runc -y

Apt-Install-Crio-Ubuntu-Linux

Once installed, start and enable the CRI-O daemon.

$ sudo systemctl start crio
$ sudo systemctl enable crio

Next, verify if the CRI-O service is running:

$ sudo systemctl status crio

You should get the following output which shows that the CRI-O service is running as expected.

Start-Enable-Crio-Service-Ubuntu

Step 4: Install CNI Plugins For CRI-O

Next, you need to install the CNI (Container Network Interface) plugins. Keep in mind that the default loopback and bridge configurations are enabled and are sufficient for running pods with CRI-O.

Therefore, to install the CNI plugins, run the following command.

$ sudo apt install containernetworking-plugins -y
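On Ubuntu, this package typically places the plugin binaries under /usr/lib/cni, while CRI-O looks for network configuration files under /etc/cni/net.d. You can peek at both locations to see what is present (the exact file names may vary with your versions):

$ ls /usr/lib/cni/
$ ls /etc/cni/net.d/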

Once installed, edit the CRI-O configuration file

$ sudo nano /etc/crio/crio.conf

Uncomment the network_dir and plugin_dirs settings, and add ‘/usr/lib/cni/’ to the plugin_dirs list.

Crio-Conf-Network-Plugins-Directory-Ubuntu
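For reference, after the edit the network section of /etc/crio/crio.conf should look roughly like the sketch below; the default /opt/cni/bin path may differ on your system:

[crio.network]
network_dir = "/etc/cni/net.d/"
plugin_dirs = [
     "/opt/cni/bin/",
     "/usr/lib/cni/",
]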

Save the changes and exit the configuration file.

Next, restart the CRI-O service.

$ sudo systemctl restart crio

Step 5: Install CRI-O tools

In addition, you need to install the cri-tools package, which provides the crictl command-line utility used for interacting with and managing pods and containers.

To do so, run the command:

$ sudo apt install -y cri-tools

Once installed, confirm the crictl version and the runtime version as follows.

$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version

Crictl-Crio-Version-Check-Ubuntu-Linux

Be sure to also check if CRI-O is ready to deploy pods using the following command:

$ sudo crictl info

Crictl-Info-Command-Ubuntu

The crictl command provides an autocompletion feature that lets you complete commands by pressing the TAB key. To enable command completion, run the following commands.

$ sudo su -

# crictl completion > /etc/bash_completion.d/crictl

Then reload the current bash session.

# source ~/.bashrc

Enable-Bash-Completion-Crictl-Command-Ubuntu

To use the auto-completion feature, you will need to log out or start a new terminal session. Then simply type the crictl command and press the TAB key to view all the options.

$ crictl

Crictl-Command-Options-Ubuntu

Step 6: Create a Pod using crictl utility

At this point, CRI-O is fully installed, configured, and ready to spin up a pod. In this section, we will create an Apache web server inside a pod and confirm that it is serving requests.

First, we are going to set up a pod sandbox, which is an isolated environment, using a pod configuration file as follows.

$ sudo nano apache_sandbox.json

We will then add the following configuration to the file.

{
    "metadata": {
        "name": "apache-sandbox",
        "namespace": "default",
        "attempt": 1,
        "uid": "hdishd83djaidwnduwk28bcsb"
    },
    "linux": {
    },
    "log_directory": "/tmp"
}

Save and exit. Next, create the pod using the following command. This prints out a long alphanumeric string, which is the pod ID.

$ sudo crictl runp apache_sandbox.json
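If you prefer not to copy the printed ID by hand, you can instead capture it in a shell variable when creating the pod; a small convenience sketch using the same configuration file:

$ POD_ID=$(sudo crictl runp apache_sandbox.json)
$ echo $POD_ID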

To confirm that the pod has been created, run the command.

$ sudo crictl pods

Crictl-Pods-Status-Ubuntu

To retrieve more information about the created pod, run the command:

$ sudo crictl inspectp --output table 05ba2f0704f22

This prints out the ID, name, UID, namespace, creation date, and internal pod IP, among other details.

Crictl-Inspect-Pod-Ubuntu-Linux

Step 7: Create a container inside a pod

In this section, we are going to create an Apache web server container inside the pod. Use the crictl utility to pull the Apache web server (httpd) image from Docker Hub.

$ sudo crictl pull httpd

You can verify that the image was pulled as shown.

$ sudo crictl images

Crictl-pull-image-ubuntu

Next, we are going to define a container configuration file for the Apache web server.

$ sudo nano container_apache.json

Copy and paste the following code.

{
  "metadata": {
      "name": "apache"
    },
  "image":{
      "image": "httpd"
    },
  "log_path":"apache.0.log",
  "linux": {
  }
}

Save and exit the configuration file.

Next, to create the container inside the sandbox pod created earlier, run the command:

$ sudo crictl create 05ba2f0704f22 container_apache.json apache_sandbox.json

This outputs a long alphanumeric container ID to the terminal. Take note of this ID.

Finally, use the ID to start the Apache web server container as follows.

$ sudo crictl start 37f4d26510965452aa918f04d629f5332a1cd398d4912298c796942e22f964a7

Create-Container-Inside-Pod-Ubuntu-Linux
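As with the pod, you can avoid copying IDs manually by capturing them in shell variables. A minimal sketch, reusing the pod ID from earlier:

$ CONTAINER_ID=$(sudo crictl create 05ba2f0704f22 container_apache.json apache_sandbox.json)
$ sudo crictl start $CONTAINER_ID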

To check the container status, run the command:

$ sudo crictl ps

Pods-Status-Crictl-Command-Ubuntu

To verify that the Apache web server is running, send an HTTP request to it with the curl command, using the pod’s internal IP address.

$ curl -I 10.85.0.2

The following output confirms that the web server is running.

Curl-Command-Httpd-Pod-Ubuntu
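If you would rather not read the IP address off the inspectp output manually, you can usually extract it with jq; this sketch assumes jq is installed and that your crictl version exposes the address at .status.network.ip:

$ sudo crictl inspectp --output json 05ba2f0704f22 | jq -r '.status.network.ip'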

Conclusion

That’s all for this guide. We have successfully installed CRI-O on Ubuntu 22.04 and gone ahead to create a pod and a container. Your comments and feedback are welcome.

Also Read: How to Install Docker on Ubuntu 22.04 / 20.04 LTS

How to Install Go (Golang) on Ubuntu Linux Step-by-Step https://www.linuxtechi.com/install-go-golang-on-ubuntu-linux/ https://www.linuxtechi.com/install-go-golang-on-ubuntu-linux/#respond Sat, 25 Feb 2023 02:37:18 +0000 https://www.linuxtechi.com/?p=15419 In this guide, we are going to cover how you can install Golang Go on Ubuntu Linux step-by-step. For demonstration purposes, we will use Ubuntu 22.04 LTS as our Linux environment. Go is also referred to as Golang, it is a free and open-source programming ... Read more

In this guide, we are going to cover how you can install Golang Go on Ubuntu Linux step-by-step. For demonstration purposes, we will use Ubuntu 22.04 LTS as our Linux environment.

Go, also referred to as Golang, is a free and open-source programming language designed at Google by Robert Griesemer, Rob Pike, and Ken Thompson. It is the apple of many a developer’s eye thanks to its simplicity, efficiency, and concurrency support; its concurrent nature means it can run multiple tasks at the same time.

It is mostly used for backend purposes such as server-side programming. It also features heavily in game and cloud-native development, command-line tools, data science, and many other use cases.

There are three main ways of installing Go in Linux:

  1. Installing from the Official Binary package
  2. Installing using APT package manager ( Debian & Ubuntu distros)
  3. Installing from Snap packages

Let us go through each of these installation methods.

Install Go from the Official Binary Package

This is the preferred installation method, as it provides the latest version of Go and works across all Linux distributions. To proceed, follow the steps outlined below.

Step 1: Update the System

The first step in installing Go is to update the system. So log in to your server and update the local package index as shown.

$ sudo apt update

Step 2: Download the Go Binary Package

The next step is to download the installation tarball. To do so, head over to the official Go download page and grab the 64-bit tarball installation file (amd64.tar.gz).

On the command line, you can download the latest tarball using the wget command. At the time of writing this guide, the latest version of Go is 1.20.1. This is likely to have changed by the time you are reading this guide, so be sure to replace the version number accordingly.

$ wget https://go.dev/dl/go1.20.1.linux-amd64.tar.gz

Download-go-wget-command-linux

Step 3: Extract the Tarball to the /usr/local Directory

Once the download is complete, extract the tarball to the /usr/local directory, which is the standard location for locally installed software.

$ sudo tar -C /usr/local -xzf go1.20.1.linux-amd64.tar.gz

The -C option extracts the contents of the tarball into the /usr/local directory, creating a go folder there. To confirm this, list the contents of the /usr/local/go directory using the ls command.

$ ls /usr/local/go

list-user-local-go-ubuntu-linux
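Note that if you are upgrading an existing installation, the official Go instructions recommend deleting any previous tree under /usr/local/go before extracting the new tarball, for example:

$ sudo rm -rf /usr/local/go

You can then re-run the tar command above.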

Step 4: Add the Golang binary to the $PATH environment variable

The next step is to add Go to the $PATH environment variable.

Open up your .bashrc or .bash_profile file.

$ nano ~/.bash_profile

Paste the following line.

export PATH=$PATH:/usr/local/go/bin

Save the changes and exit the file.

Add-Go-Command-Path-Linux

Next, reload the .bashrc or .bash_profile file using the source command as shown.

$ source ~/.bash_profile

Now Go is successfully installed.

Step 5: Verify Go Version

To check if Go is installed and its version, run the command:

$ go version

The following output shows that go has successfully been installed.

Go-Command-Version-Check-Linux

Install Go from the APT Package Manager

If you are running Debian/Ubuntu or any of their derivatives and don’t mind running a slightly older version of Go, then the APT package manager will do just fine.

To get started, update the package lists as shown.

$ sudo apt update

If you are curious, you can check the availability of the golang-go package in the Debian/Ubuntu repositories as shown.

$ apt search golang-go

Apt-Search-Golang-go-Ubuntu

To install Go using the apt command, run:

$ sudo apt install golang-go

Install-golang-go-apt-command-ubuntu-linux

Once the installation is complete, verify that Go is installed as shown.

$ go version

From the output, you can see that Go version 1.18.1 has been installed. Notably, this is not the latest version at the time of writing this guide.

Go-Version-Chech-Post-Installation-Apt

Install Go from Snap Package

Installing Go from snap is as easy as they come. First, you need to ensure that snap is already enabled on your system. Next, install Go from snap as follows.

$ sudo snap install go --classic

Once the installation is complete, verify that Go has successfully been installed as shown

$ /snap/bin/go version

Install-go-with-snap-ubuntu-linux

Testing Go Installation

In this section, we will create a simple Go program and test it to see if our installation works.

We will create a separate directory for our project as follows.

$ mkdir -p ~/go_projects/test

Next, we will navigate into the directory.

$ cd ~/go_projects/test

We will create a program called greetings.go that prints out a simple message on the terminal.

$ nano greetings.go

Copy and paste the following lines of code into the file.

package main

import "fmt"

func main() {
    fmt.Printf("Congratulations! Go has successfully been installed on your system\n")
}

Save the changes and exit. Then run the program as follows

$ go run greetings.go

You will get the following output, which confirms that the Go installation is working as expected.

Run-Go-Program-Ubuntu-Linux
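Besides go run, you can also compile the program into a standalone binary with go build and execute it directly; a quick check from the same directory:

$ go build greetings.go
$ ./greetings

This should print the same congratulatory message.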

Conclusion

You are now all set! Go has successfully been installed. In this guide, we have demonstrated how to install the Go (Golang) programming language on Linux.
