
Cannot delete file [Datastore] vmkdump.dumpfile

If you face an issue while deleting a dump file that resides on a datastore, you should deactivate the dump file on the host and then try to delete it again.

You can get a list of the dump files configured on your ESXi host by connecting with SSH and running the command below:

esxcli system coredump file list

This will output the dump files known to the host; the one that is currently in use will show true in the Active column.

In order to deactivate the dump file, you should unset it and then retry the deletion, either from the command line or through the GUI.

esxcli system coredump file set -u
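Listing the dump files again should now show false in the Active column for the previously active file:

esxcli system coredump file list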

The command below removes the dump file. The -f flag specifies the path of the file to remove; if the file is still configured or active, you can additionally pass -F (--force) to deactivate it as part of the removal:

esxcli system coredump file remove -f /vmfs/volumes/volume/vmkdump/11111111-2222-3333-4444-555555555555.dumpfile

Automate your deployments with .gitlab-ci.yml and OpenShift – GitLab DevOps

This article describes how to create a GitLab CI/CD pipeline that uses gitlab-runner and Docker as the build strategy in order to deploy microservices on OpenShift.

In my previous articles I explained how to create your own self-hosted GitLab instance and how to deploy a CI/CD pipeline using gitlab-runner. The whole setup is container based, so the required infrastructure can be deployed on OpenShift as well.

The pipeline consists of three steps: housekeeping, staging and cleaning. It is based on the default example that GitLab provides and uses oc commands to communicate with OpenShift. It is configured to run only for the develop branch, and every time a new commit is pushed a build starts.

  • The housekeeping step removes every resource that was created by a previous build.
  • The staging step builds the microservices based on your Dockerfile instructions, as the build strategy is set to docker.
  • The cleaning step removes the build pods that OpenShift created.

The housekeeping step is allowed to fail, so the build continues even when there are no leftover resources to remove. A minimal sketch of such a pipeline is shown below.
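For illustration, this is roughly what such a .gitlab-ci.yml could look like. The application name my-app, the CI/CD variables OPENSHIFT_SERVER and OPENSHIFT_TOKEN and the exact oc commands are assumptions made for this example (the runner is also assumed to have the oc client available); the actual pipeline used in this article is in the repository linked at the end.

stages:
  - housekeeping
  - staging
  - cleaning

# log in to the OpenShift cluster before every job (hypothetical CI variables)
before_script:
  - oc login "$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN"

# remove resources left over from a previous build; allowed to fail when nothing exists yet
housekeeping:
  stage: housekeeping
  allow_failure: true
  only:
    - develop
  script:
    - oc delete all --selector app=my-app

# build the image from the repository Dockerfile (docker strategy) and deploy it
staging:
  stage: staging
  only:
    - develop
  script:
    - oc new-build --binary --name=my-app --strategy=docker
    - oc start-build my-app --from-dir=. --follow
    - oc new-app my-app

# remove completed pods (including the build pods) from the project
cleaning:
  stage: cleaning
  only:
    - develop
  script:
    - oc delete pods --field-selector=status.phase=Succeeded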

You can see below a simple run of the pipeline.

You can find the code of the pipeline in the repository below:

https://github.com/geralexgr/gitlab-cicd-openshift-deploy/blob/main/gitlab-ci.yml


Extend xfs disk with parted command line – grow xfs partition

This article describes how to extend a physical disk with an XFS filesystem when LVM is not an option.

As shown below, the /dev/sdb device is formatted as XFS and mounted under /test. Its current capacity is 7 GB.

Locate the existing disk:
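For example, lsblk and df can be used to confirm this (using the device and mount point from this setup):

lsblk /dev/sdb
df -h /test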

Let's extend the disk by adding 2 GB to it. This is done on the hypervisor by growing the virtual disk that backs /dev/sdb.

However, the new capacity will not be usable on Linux yet, because the partition also needs to be extended with parted. Open the disk with parted and print the partition table; as you can see, the partition number is 1, and it is needed for the next commands.

parted /dev/sdb
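Inside the parted prompt, the print command lists the partition table and quit exits:

print
quit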

Execute the command below. The -s flag runs parted non-interactively and -a opt uses optimal partition alignment:

parted -s -a opt /dev/sdb "resizepart 1 100%"

If the filesystem is not unmounted, parted will warn you. Proceed with the umount:

umount /test

Execute the resize command again; it should now complete successfully.

Verify that the partition has been extended:
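For example, lsblk should now report the new size for the disk and the partition (the filesystem itself has not been grown yet):

lsblk /dev/sdb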

Mount the disk again and grow the XFS filesystem:
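Assuming the partition is /dev/sdb1 and the mount point is /test, the commands look like the following; by default xfs_growfs expands a mounted XFS filesystem to the full size of its partition:

mount /dev/sdb1 /test
xfs_growfs /test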

Verify the new capacity on the operating system:
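For example, df should now report the increased size (about 9 GB in this setup) for /test:

df -h /test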