
Install GitLab Runner and execute your first pipeline

In this article we will see how to install GitLab Runner on Red Hat Enterprise Linux 6.

What is GitLab Runner? Below is the definition of GitLab Runner as provided in the GitLab documentation:

"GitLab Runner is the open source project that is used to run your jobs and send the results back to GitLab. It is used in conjunction with Gitlab CI/CD, the open-source continuous integration service included with GitLab that coordinates the jobs"

As you can see in the screenshot below, there are currently no runners registered with your GitLab instance. To see all registered runners:

Go to your GitLab console -> Your Project -> Settings -> CI / CD

Expand the Runners section and you will see a screen similar to the one below. Any registered runners are listed in this section; it is empty here because no runners have been registered yet.



Now let us go through the steps to install GitLab Runner, register it with GitLab, and run your first pipeline:

Install Runner:

- Run the wget command to download the gitlab-runner binary.

wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64


- Give the binary execute permission and create a directory that the runner will use to clone repositories. This directory is also known as the build directory.

chmod +x /usr/local/bin/gitlab-runner
mkdir /home/gitlab-runner-builds
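
Optionally, you can confirm at this point that the binary downloaded correctly by printing its version:

gitlab-runner --version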


- Execute the below commands to install and start the gitlab-runner service. Provide the working directory created in the previous step and the user that the runner should run as. I am using the root user here, but it is recommended to create a separate user for the runner (a sketch for this follows the commands below).

gitlab-runner install --working-directory /home/gitlab-runner-builds --user root
gitlab-runner start
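
If you would rather not run jobs as root, a minimal sketch for a dedicated account looks like this (the user name gitlab-runner is a common convention, not a requirement):

# Create a dedicated user for the runner
useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
# Make the build directory writable by that user
chown gitlab-runner: /home/gitlab-runner-builds
# Install and start the service under the new user
gitlab-runner install --working-directory /home/gitlab-runner-builds --user gitlab-runner
gitlab-runner start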


Runner Registration

- Execute the below command to register the runner with your GitLab server:

gitlab-runner register

You will need to provide the following parameters (a scripted equivalent is shown after this list):

gitlab-ci coordinator URL: the http or https URL of your GitLab server.
gitlab-ci token: this can be found in the GitLab console -> <Project Name> -> Settings -> CI/CD -> Runners.
Description: a name that you want to give to your runner.
Tags: an optional, comma-separated list of tags used to match jobs to this runner.
Executor: GitLab Runner implements a number of executors that can be used to run your builds in different scenarios. In this example, I am using the shell executor.
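
For reference, the same registration can be scripted non-interactively. Below is a minimal sketch; the URL, token, description, and tags are placeholder values that you must replace with your own:

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "YOUR_PROJECT_TOKEN" \
  --description "my-shell-runner" \
  --tag-list "shell,linux" \
  --executor "shell"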


Once your runner is registered, you should be able to see it in the GitLab console:
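
You can also confirm the registration from the server itself, as gitlab-runner ships subcommands to list and verify the runners configured on that host:

gitlab-runner list
gitlab-runner verify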


Before executing our pipeline, let us understand a special file, ".gitlab-ci.yml". Below is an extract from the GitLab documentation:

GitLab CI/CD pipelines are configured using a YAML file called .gitlab-ci.yml within each project.

The .gitlab-ci.yml file defines the structure and order of the pipelines and determines:

  • What to execute using GitLab Runner.
  • What decisions to make when specific conditions are encountered. For example, when a process succeeds or fails.
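
To make this concrete, here is a minimal hand-written .gitlab-ci.yml sketch (for illustration only; the walkthrough below uses GitLab's built-in "Bash" template instead) that a shell executor can run:

stages:
  - build
  - test

# Each top-level key other than "stages" defines a job
build-job:
  stage: build
  script:
    - echo "Compiling the code..."

test-job:
  stage: test
  script:
    - echo "Running tests..."

Jobs in the same stage run before jobs in later stages, so build-job completes before test-job starts.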
Execute your first Pipeline:

- Go to your GitLab console -> Project -> Repository -> Files -> New File


- Click on "Select a template type" -> .gitlab-ci.yml


- Under Template types, select a template that works with your runner. Since I registered a runner with the shell executor, I selected the "Bash" template.


- Click commit.

As soon as you commit the file, GitLab triggers the pipeline and all of its stages are executed.









