HOWTO: Distributed Object Storage with Minio on FreeBSD

n00b

#2
Many thanks for your manual. Now I am trying to implement it in our infrastructure for an HA web app.

Maybe you can point me to a description of the quorum role for a Minio node? I can't find this role described in the official docs or in your manual.
 
vermaden (OP)

#3
Many thanks for your manual. Now I am trying to implement it in our infrastructure for an HA web app.

Maybe you can point me to a description of the quorum role for a Minio node? I can't find this role described in the official docs or in your manual.
Thanks ;)

... about that quorum role ... Minio does not support quorum type nodes (or arbiters, in MongoDB nomenclature).

I just named it that way 'logically'. IN MY SCENARIO most of the nodes are in the 1st and 2nd datacenter, so this 3rd site can be 'lost' without any impact on the cluster. But if I lose only site A (the 1st one), then this 'quorum' node serves its role: it keeps more than half of the nodes available, like a server quorum node.

The 'quorum' node in my setup still has data on it, but only 2/16 of it; the 'data' nodes hold 7/16 each, which adds up to 14/16 on the 'data' nodes and 2/16 on the node with the logical 'quorum' role.
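To put numbers on it, the cluster stays read/write as long as more than half of those 16 parts (disks, in my case) are online:

Code:
disks: 7 (site A) + 7 (site B) + 2 ('quorum' site) = 16
lose the 'quorum' site:  7 + 7 = 14 disks left -> read/write still works
lose only site A:        7 + 2 =  9 disks left -> read/write still works (more than 16/2)
lose sites A and B:              2 disks left  -> cluster unavailable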

Hope that helps, sorry for the misunderstanding.
 

n00b

#4
Thanks ;)

... about that quorum role ... Minio does not support quorum type nodes (or arbiters, in MongoDB nomenclature).

I just named it that way 'logically'. IN MY SCENARIO most of the nodes are in the 1st and 2nd datacenter, so this 3rd site can be 'lost' without any impact on the cluster. But if I lose only site A (the 1st one), then this 'quorum' node serves its role: it keeps more than half of the nodes available, like a server quorum node.

The 'quorum' node in my setup still has data on it, but only 2/16 of it; the 'data' nodes hold 7/16 each, which adds up to 14/16 on the 'data' nodes and 2/16 on the node with the logical 'quorum' role.

Hope that helps, sorry for the misunderstanding.
Great community. Rapid answers. Proud to be part of that.

So, actually, if I have the possibility to store the same amount of data on 3 nodes (3 DCs), that would be fine too and less complicated, as I understood it?

Minio continues to work through a partial failure with n/2 nodes, meaning 1 of 2, 2 of 4, 3 of 6 and so on. In my understanding that also means it makes no difference whether I use 2 or 3 nodes, because in both scenarios I can only afford to lose 1 node, so from a resource utilization viewpoint it is better to choose 2 nodes or 4. I think I'll create 4 nodes, 2 located in the first DC and the other 2 in the second, all of them on separate hardware (I have various dedicated hardware in 2 different DCs for now), covering both hardware failure and the loss of a whole DC that way.

P.S. Sorry for the noobie questions, just gathering more info.
 
vermaden (OP)

#5
Great community. Rapid answers. Proud to be part of that.

So, actually, if I have the possibility to store the same amount of data on 3 nodes (3 DCs), that would be fine too and less complicated, as I understood it?
Yes.

Minio continues to work through a partial failure with n/2 nodes, meaning 1 of 2, 2 of 4, 3 of 6 and so on. In my understanding that also means it makes no difference whether I use 2 or 3 nodes, because in both scenarios I can only afford to lose 1 node, so from a resource utilization viewpoint it is better to choose 2 nodes or 4. I think I'll create 4 nodes, 2 located in the first DC and the other 2 in the second, all of them on separate hardware (I have various dedicated hardware in 2 different DCs for now), covering both hardware failure and the loss of a whole DC that way.
Not quite, with n/2 nodes you only have READ ONLY support; with n/2 + 1 (more than half) you have READ/WRITE support.

That is why I created 3 nodes and not 2.

Also, it's not about the nodes, it's about the disks. If each node has 1 disk, then it's a nodes = disks situation. But in my setup I have 2 disks for minio in the 3rd site, 7 disks for minio in the 2nd site and 7 disks for minio in the 1st site.
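To illustrate (a sketch only, with placeholder hostnames and one minio instance per site, which is not necessarily how my jails are laid out): every endpoint you pass to minio server is one disk, so a 7 + 7 + 2 layout means listing 16 endpoints:

Code:
minio server \
  http://site1:9000/minio/disk1 http://site1:9000/minio/disk2 http://site1:9000/minio/disk3 \
  http://site1:9000/minio/disk4 http://site1:9000/minio/disk5 http://site1:9000/minio/disk6 \
  http://site1:9000/minio/disk7 \
  http://site2:9000/minio/disk1 http://site2:9000/minio/disk2 http://site2:9000/minio/disk3 \
  http://site2:9000/minio/disk4 http://site2:9000/minio/disk5 http://site2:9000/minio/disk6 \
  http://site2:9000/minio/disk7 \
  http://site3:9000/minio/disk1 http://site3:9000/minio/disk2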
 

n00b

New Member

Thanks: 1
Messages: 5

#6
Not quite, with n/2 nodes you only have READ ONLY support; with n/2 + 1 (more than half) you have READ/WRITE support.
Important correction. Now I get it.

I will think about how to create the best solution for my case; maybe I will post a report about my experience. Thank you one more time.

P.S. I am testing in dedicated jails as you did, but on separate, geo-independent hosts, using Ansible for mass management and configuration.
 

n00b

#8
I cannot get the minio nodes to interconnect. The minio server starts on all 4 instances, but they do not see each other at the application layer. :(
Any suggestions on how I can debug minio?

My steps:

I allowed raw sockets on the jail hosts and forwarded port 9000 from the host to the jail (pf rule sketch at the end of this post). I can see connections on each node from the other nodes via tcpdump on the interface. I can access the web interface on each node with the configured access key and secret (the same on all nodes). In my setup I created the directory /home/minio on each node, which I want to use as the content dir. Minio also creates these strange directories (http:) in the filesystem root, with a domain/home/minio subdirectory (it seems like I have to use this path, not the /home/minio I created). I am starting minio with this command while the minio service is running:

minio --config-dir /usr/local/etc/minio/ server http://os0.domain:9000/home/minio http://os1.domain:9000/home/minio http://os2.domain:9000/home/minio http://os3.domain:9000/home/minio

on each host, and then I get this message:

Waiting for a minimum of 2 disks to come online (%timer%)

I left it running all night, and the message keeps repeating :(

On the web interface I get the error "Server not initialized, please try again".

I am using cbsd as a jail wrapper. I also found this config entry via Google:

JSON:
        "file": {
                "enable": true,
                "fileName": "/var/log/minio.log",
                "level": "debug"
        },
but I do not see any "debug" info in the logfile, just the same "waiting" messages.

Still no luck :( Maybe some host restrictions... I will try to reproduce the same setup on bhyve VMs.
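For reference, the port forward on each jail host is roughly the following (interface name and jail IP are just examples, not my real values):

Code:
# /etc/pf.conf on the jail host - redirect port 9000 to the minio jail
ext_if  = "em0"        # external interface (example)
jail_ip = "10.0.0.10"  # minio jail IP (example)
rdr pass on $ext_if inet proto tcp from any to ($ext_if) port 9000 -> $jail_ip port 9000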
 

n00b

#9
Finally, got it working.

My problem was:

1) I used my own custom config file on every node instead of the one minio generates itself.
2) I started with a too-complicated configuration; on the second try I just created a 'storage' dir in the filesystem root.

Sharing my Ansible playbook:

YAML:
- hosts: minios
  vars:
    sysrc_disks: "sysrc minio_disks=\"http://node1:9000/storage http://node2:9000/storage http://node3:9000/storage http://node4:9000/storage\""
  gather_facts: no
  tasks:
    - name: install minio and python27 for ansible
      raw: pkg install -y python27 minio
      become: true

    - name: add minio to startup
      shell: sysrc minio_enable=YES
      become: true

    - name: add minio disks to rc.conf
      shell: "{{ sysrc_disks }}"
      become: true

    - name: create config dir
      file:
        path: /usr/local/etc/minio
        state: directory
        owner: minio
        group: minio
        recurse: yes
      become: true

    - name: create minio datadir
      file:
        path: /storage
        state: directory
        owner: minio
        group: minio
        recurse: no
      become: true

    - name: create logfile
      file:
        path: /var/log/minio.log
        state: touch
        owner: minio
        group: minio
      become: true
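To run it, I use something like this (the playbook and inventory file names are just examples; 'minios' has to be a host group in the inventory):

Code:
# -K asks for the become (sudo) password on the target hosts
ansible-playbook -i hosts.ini minio.yml -K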
All the steps are similar to vermaden's manual.
Nodes are syncing, files are uploading; I will hand it over to the developers to try the SDK.

As the jail wrapper I used cbsd, which is a bit easier than manual creation and has ZFS integration out of the box. That way I could easily roll my nodes back to a 'clean' state, so testing was pretty comfortable.
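Under the hood that rollback boils down to ZFS snapshots; with plain zfs it looks roughly like this (the dataset name is just an example, and cbsd has its own wrappers for it):

Code:
zfs snapshot zroot/jails/minio1@clean    # take a 'clean' snapshot after provisioning
zfs rollback zroot/jails/minio1@clean    # return the stopped jail to that state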

Interestingly, in my setup I have 4 nodes where 3 of them have a lot of free space available and 1 has only 10 GB. In the minio web UI the 3 big nodes show 20 GB of free space (a minio per-node disk limit?), while the 4th shows 10 GB as expected. I will look into why I only see 20 GB where terabytes are available.
 