Workshop #2 - Valentina Arias #35

Open · wants to merge 5 commits into master
14 changes: 14 additions & 0 deletions 04_distributed_filesystem/.gitignore
@@ -0,0 +1,14 @@
# Created by https://www.toptal.com/developers/gitignore/api/vagrant
# Edit at https://www.toptal.com/developers/gitignore?templates=vagrant

### Vagrant ###
# General
.vagrant/

# Log files (if you are creating logs in debug mode, uncomment this)
# *.log

### Vagrant Patch ###
*.box

# End of https://www.toptal.com/developers/gitignore/api/vagrant
70 changes: 70 additions & 0 deletions 04_distributed_filesystem/README.md
@@ -0,0 +1,70 @@
# Distributed File System (With Glusterfs)

![alt text](https://docs.gluster.org/en/v3/images/640px-GlusterFS_Architecture.png "gluster")
> Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.

## Volumes

- Distributed Glusterfs Volume
![img](https://cloud.githubusercontent.com/assets/10970993/7412364/ac0a300c-ef5f-11e4-8599-e7d06de1165c.png)

- Replicated Glusterfs Volume
![img2](https://cloud.githubusercontent.com/assets/10970993/7412379/d75272a6-ef5f-11e4-869a-c355e8505747.png)

- Striped Glusterfs Volume
![img3](https://cloud.githubusercontent.com/assets/10970993/7412379/d75272a6-ef5f-11e4-869a-c355e8505747.png)

### GlusterFS (Initialization)

On the master node
```
$ sudo gluster peer probe node-1
$ sudo gluster peer probe node-2
$ gluster pool list
$ sudo gluster volume create gv0 replica 3 master:/gluster/data/gv0 node-1:/gluster/data/gv0 node-2:/gluster/data/gv0
$ sudo gluster volume set gv0 auth.allow 127.0.0.1
$ sudo gluster volume start gv0
```
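
Before mounting, the pool and volume state can be verified. This is a quick sanity check with standard GlusterFS commands, not an original step of this workshop:
```
$ sudo gluster peer status
$ sudo gluster volume info gv0
$ sudo gluster volume status gv0
```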

On each node
```
$ sudo mount.glusterfs localhost:/gv0 /mnt
```
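
To make the mount persistent and to confirm that replication works, a minimal sketch (it assumes the brick path /gluster/data/gv0 used in the volume-create command above):
```
$ echo "localhost:/gv0 /mnt glusterfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab
$ echo "hello gluster" | sudo tee /mnt/hello.txt
$ ls /gluster/data/gv0   # the file should now appear in every node's brick
```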

To add a new server

| Command | Description |
|---|---|
| gluster peer status | Check the cluster status |
| gluster peer probe node4 | Add the new node |
| gluster volume status | Note the volume name |
| gluster volume add-brick swarm-vols replica 5 node4:/gluster/data/swarm-vols | Add the new brick (TODO: verify this command; see the sketch below) |
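
A hedged sketch of the full sequence behind that TODO (it assumes the volume currently has four replicas, so the new brick raises the count to 5; check `gluster volume info` first):
```
$ sudo gluster peer probe node4
$ sudo gluster volume add-brick swarm-vols replica 5 node4:/gluster/data/swarm-vols
$ sudo gluster volume heal swarm-vols full   # copy the existing data onto the new brick
```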

To remove a node from the cluster, its bricks must first be removed from the associated volumes

| Command | Description |
|---|---|
| gluster volume info | Check the identifiers of the current bricks |
| gluster volume remove-brick swarm-vols replica 1 node1:/gluster/data force | Remove a brick from a volume with two replicas |
| gluster peer detach node1 | Remove a node from the cluster |

Deleting a volume

| Command | Description |
|---|---|
| gluster volume stop swarm-vols | Stop the volume |
| gluster volume delete swarm-vols | Delete the volume |


### References
* https://docs.gluster.org/en/v3/Administrator%20Guide/Managing%20Volumes/
* https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/
* https://support.rackspace.com/how-to/add-and-remove-glusterfs-servers/
* http://embaby.com/blog/using-glusterfs-docker-swarm-cluster/
* https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
* http://ask.xmodulo.com/create-mount-xfs-file-system-linux.html
* https://www.cyberciti.biz/faq/linux-how-to-delete-a-partition-with-fdisk-command/
* https://support.rackspace.com/how-to/getting-started-with-glusterfs-considerations-and-installation/
* https://everythingshouldbevirtual.com/virtualization/vagrant-adding-a-second-hard-drive/
* https://www.jamescoyle.net/how-to/351-share-glusterfs-volume-to-a-single-ip-address

62 changes: 62 additions & 0 deletions 04_distributed_filesystem/Vagrantfile
@@ -0,0 +1,62 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.

firstDisk = './firstDisk.vdi'
secondDisk = './secondDisk.vdi'
thirdDisk = './thirdDisk.vdi'
fourthDisk = './fourthDisk.vdi'
Vagrant.configure("2") do |config|

config.ssh.insert_key = false
config.vm.define "node_master" do |lb|
lb.vm.box = "generic/centos9s"
lb.vm.hostname = "master"
lb.vm.network "private_network", ip: "192.168.56.200"
lb.vm.provider "virtualbox" do |vb|
vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node_master"]
unless File.exist?(firstDisk)
vb.customize ['createhd', '--filename', firstDisk, '--variant', 'Fixed', '--size', 5 * 1024]
end
vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', firstDisk]
end
lb.vm.provision "shell", path: "scripts/glusterfs.sh"
lb.vm.provision "shell", path: "scripts/configuration.sh"
end

config.vm.define "node1" do |node1|
node1.vm.box = "generic/centos9s"
node1.vm.hostname = "node-1"
node1.vm.network "private_network", ip: "192.168.56.11"
node1.vm.provider "virtualbox" do |vb|
vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node-1"]
unless File.exist?(secondDisk)
vb.customize ['createhd', '--filename', secondDisk, '--variant', 'Fixed', '--size', 5 * 1024]
end
vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', secondDisk]
end
node1.vm.provision "shell", path: "scripts/glusterfs.sh"
node1.vm.provision "shell", path: "scripts/configuration.sh"
end

config.vm.define "node2" do |node2|
node2.vm.box = "generic/centos9s"
node2.vm.hostname = "node-2"
node2.vm.network "private_network", ip: "192.168.56.12"
node2.vm.provider "virtualbox" do |vb|
vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node-2"]
unless File.exist?(thirdDisk)
vb.customize ['createhd', '--filename', thirdDisk, '--variant', 'Fixed', '--size', 5 * 1024]
end
vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', thirdDisk]
end
node2.vm.provision "shell", path: "scripts/glusterfs.sh"
node2.vm.provision "shell", path: "scripts/configuration.sh"
end

end

7 changes: 7 additions & 0 deletions 04_distributed_filesystem/ansible.cfg
@@ -0,0 +1,7 @@
[defaults]
inventory=./ansible_hosts
remote_user=vagrant
private_key_file=$HOME/.vagrant.d/insecure_private_key
host_key_checking=False
retry_files_enabled=False
#interpreter_python=auto_silent
6 changes: 6 additions & 0 deletions 04_distributed_filesystem/ansible_hosts
@@ -0,0 +1,6 @@
[nodes]
node1 ansible_ssh_host=192.168.56.11
node2 ansible_ssh_host=192.168.56.12

[masters]
master ansible_ssh_host=192.168.56.200
21 changes: 21 additions & 0 deletions 04_distributed_filesystem/playbooks/01-master-conf.yml
@@ -0,0 +1,21 @@
---
- hosts: master
become: true
tasks:
- name: Start the service
systemd:
name: glusterd
state: started

- name: Add node1
shell: sudo gluster peer probe node1

- name: Add node2
shell: sudo gluster peer probe node2

- name: Create the volume
shell: sudo gluster volume create gv0 replica 3 master:/data node1:/data node2:/data force

- name: Start it
shell: sudo gluster volume start gv0

19 changes: 19 additions & 0 deletions 04_distributed_filesystem/playbooks/02-node-conf.yml
@@ -0,0 +1,19 @@
---
- hosts: nodes
become: true
tasks:
- name: Mount GlusterFS volume
mount:
name: /mnt
fstype: glusterfs
opts: _netdev,defaults
src: "localhost:/gv0"
state: mounted
- hosts: node1
tasks:
- name: Create file
shell: echo "Hola desde node1" | sudo tee /mnt/saludo-node1.txt
- hosts: node2
tasks:
- name: Create file
shell: echo "Hola desde node2" | sudo tee /mnt/saludo-node2.txt
3 changes: 3 additions & 0 deletions 04_distributed_filesystem/scripts/configuration.sh
@@ -0,0 +1,3 @@
echo "192.168.56.200 master" >> /etc/hosts
echo "192.168.56.11 node1" >> /etc/hosts
echo "192.168.56.12 node2" >> /etc/hosts
14 changes: 14 additions & 0 deletions 04_distributed_filesystem/scripts/glusterfs.sh
@@ -0,0 +1,14 @@
# Install GlusterFS from the CentOS Storage SIG repository and start the daemon
yum install -y centos-release-gluster
yum install -y glusterfs-server
# yum install -y xfsprogs
service glusterd start

# Create a single partition spanning the extra disk attached by the Vagrantfile
sfdisk /dev/sdb << EOF
;
EOF

# Format the new partition with XFS and mount it as the brick directory
mkfs.xfs /dev/sdb1
mkdir -p /gluster/data
# echo "/dev/sdb1 /gluster/data xfs defaults 1 2" >> /etc/fstab
mount /dev/sdb1 /gluster/data/
#mount -a && mount
35 changes: 27 additions & 8 deletions README.md
@@ -1,11 +1,30 @@
# sd-workshop2 2022-1
sd workshop2

- Complete the payments application logic so that when a payment is made through the payments microservice, the invoice amount is correctly debited; that is, currently if an invoice owes 1000 and I make a payment of 400 to that invoice, the invoice microservice overwrites the 1000 with 400 instead of showing the remaining balance 1000-400=600.
- Complete the application logic so that invoices have 3 states: 0=owed, 1=partially paid, 2=paid
- Make the applications register with Consul
- This must be a pull request to this sd-workshop2 repository

Bonus:
- Push the app images to Docker Hub
- Create a bash script that launches the whole application.
In this workshop, the configuration of a Gluster master server and two nodes is automated. Ansible is used to configure the virtual machines.

## 01-master-conf.yml
First, the glusterd service is started. Then, the master server configuration is applied: the connection to the nodes (1 and 2) is established with shell tasks that run "gluster peer probe".

Once the connection to the nodes is established, a volume called "gv0" is created and replicated across the 3 nodes (master, node1 and node2). That is, the data folder will hold the shared data of the distributed storage.

Finally, the volume is started.
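
To run only this playbook and confirm that the volume came up, something like the following can be used (a sketch; it assumes the inventory in ansible_hosts and the ansible.cfg included in this PR):
```
$ ansible-playbook playbooks/01-master-conf.yml
$ ansible masters -b -m shell -a "gluster volume info gv0"
```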

## 02-node-conf.yml
To configure the nodes, the gv0 volume created on the master is mounted on the /mnt folder. Then, a text file is created on each node.

To check that Gluster is working, simply go into the data folder, where both files can be found. :)

## How to test?
To test the Ansible configuration, simply run:

```
$ vagrant up
$ ansible-playbook ./playbooks/01-master-conf.yml
$ ansible-playbook ./playbooks/02-node-conf.yml
$ vagrant ssh node1
$ cd /data/ && ls
```

This shows that both files were created and were replicated into the /data directory.
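
As a complementary check (assuming the mount point and brick path used in the playbooks), the same files should also be visible on the other node, both through the GlusterFS mount and in the local brick:
```
$ vagrant ssh node2
$ ls /mnt /data
```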