HOWTO: Distributed Object Storage with Minio on FreeBSD

n00b:
Many thanks for your manual. Now I am trying to implement it in our infrastructure for an HA web app.

Maybe you can point me to a description of the 'quorum' role of a Minio node? I can't find this role described in the official docs, or in your manual either.
 
vermaden (OP):

> Many thanks for your manual. Now i am trying to implement in our infrastructure for HA web-app.
>
> Maybe you can point to description of quorum role minio node, i cant find this role description in official docs and your man as well..

Thanks ;)

... about that quorum role ... Minio does not support quorum-type nodes (or arbiters, in MongoDB nomenclature).

I just named it that way 'logically': in my scenario most nodes are in the 1st and 2nd datacenters, so this 3rd site can be 'lost' without impact on the cluster's work, and if I lose only site A (the 1st), then this 'quorum' node serves its role by keeping more than half of the nodes available.

The 'quorum' node in my setup still has data on it, but only 2/16 of the data; the 'data' nodes have 7/16 each, which makes 14/16 for the 'data' nodes and 2/16 for the node in the logical 'quorum' role.

Hope that helps, sorry for misunderstanding.
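For illustration, the 7 + 7 + 2 disk layout described above could be expressed as one endpoint list for a distributed Minio start. Hostnames (siteA, siteB, siteC) and disk paths here are hypothetical placeholders, not the actual setup from the manual:

```shell
# Build the 16 endpoints for a 7 + 7 + 2 disk layout:
# 7 disks each on the two 'data' nodes, 2 on the 'quorum' node.
ENDPOINTS=""
for i in 1 2 3 4 5 6 7; do
  ENDPOINTS="${ENDPOINTS} http://siteA:9000/disk${i}"
  ENDPOINTS="${ENDPOINTS} http://siteB:9000/disk${i}"
done
for i in 1 2; do
  ENDPOINTS="${ENDPOINTS} http://siteC:9000/disk${i}"
done
echo ${ENDPOINTS} | wc -w    # 16 endpoints in total

# The same list would then be passed to 'minio server' on every node:
# minio server ${ENDPOINTS}
```

Losing site A or site B (7/16 of the drives) leaves more than half of the endpoints online, which is why the small third site is enough to keep the cluster writable.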
 

n00b:

> Thanks ;)
>
> ... about that quorum role ... Minio does not support quorum typde nodes (or arbiters in MongoDB nomenclature).
>
> I just named it like that 'logically' as this 3rd site - IN MY SCENARIO - where most nodes are in 1st and 2nd datacenter can be 'lost' without impact on the cluster work and if I lost only site A (1st) then this 'quorum' node serves that role to have more then half of the nodes, server quorum node.
>
> The 'quorum' node in my setup still has data on it, but only 2/16 of that data, the 'data' nodes have 7/16 of data, which will be 14/16 for 'data' nodes and 2/16 for 'quorum' logical role node.
>
> Hope that helps, sorry for misunderstanding.

Great community, rapid answers. Proud to be part of it.

So, actually, if I have the possibility to store the same amount of data on 3 nodes (3 DCs), that would be fine too and less complicated? As I understood it, Minio continues to work with a partial failure of n/2 nodes, which means 1 of 2, 2 of 4, 3 of 6, and so on. In my understanding that also means there is no difference whether I use 2 or 3 nodes, because the failure tolerance is losing only 1 node in both scenarios, so it is better to choose 2 or 4 nodes from a resource-utilization viewpoint. I think I'll create 4 nodes, with 2 located in the first DC and the other 2 in the second, all of them on separate hardware (I have various dedicated hardware in 2 different DCs for now), covering both hardware and whole-DC failure that way.

P.S. Sorry for the noobie questions, just gathering more info.
 
vermaden (OP):

> Great community. Rapid answers. Proud to be part of that.
>
> So, actually, if i have possibillity to store same amount of data on 3 nodes (3 dc), that will be fine too and less complicated? As i understood.

Yes.

> minio continues to work with partial failure with n/2 nodes, that means that 1 of 2, 2 of 4, 3 of 6 and so on. In my understanding, that also means that there are no difference, am i using 2 or 3 nodes, cuz fail-safe is only to loose only 1 node in both scenarios. so better to choose 2 nodes or 4 from resource utilization viewpoint. I think i'll create 4 nodes, where 2 are located in first dc, and other 2 in second, all of them on separate hardware ( i have various dedicated hw in 2 different dc's for now). in such way covering both hw and overall dc failure.

Not quite: with n/2 nodes you only have READ-ONLY support; with n/2 + 1 (more than half) you have READ/WRITE support.

That is why I created 3 nodes and not 2.

Also, it's not about the nodes, it's about the disks. If each node has 1 disk, then it's a nodes = disks situation. But in my case I have 2 disks for Minio in the 3rd site, 7 disks for Minio in the 2nd site, and 7 disks for Minio in the 1st site.
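The read and write thresholds above come down to simple integer arithmetic over the total disk count. A quick sketch (N is a hypothetical total; Minio counts disks, not nodes):

```shell
# For N total disks, Minio keeps serving reads with N/2 disks online,
# but needs strictly more than half (N/2 + 1) online to accept writes.
N=4
READ_MIN=$(( N / 2 ))       # 2 of 4 disks: reads still work
WRITE_MIN=$(( N / 2 + 1 ))  # 3 of 4 disks: required for writes
echo "read quorum: ${READ_MIN}, write quorum: ${WRITE_MIN}"
```

So with 4 disks split 2 + 2 across two datacenters, losing a whole DC leaves exactly N/2 online: reads survive, writes do not. That is the case the third site is meant to cover.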
 

n00b:

> Not quite, with n/2 nodes you only have READ ONLY support, with n/2 + 1 (more then half) you have READ/WRITE support.

Important correction, now I got it.

I will think about how to create the best solution for my case; maybe I will post a report about my experience. Thank you one more time.

P.S. I am testing in dedicated jails as you did, but on separate, geo-independent hosts, using Ansible for mass management and configuration.
 
vermaden (OP):

@n00b
Good luck with your setup, share your thoughts after you finish.
 

n00b:

I cannot get the Minio nodes to inter-connect. The Minio server starts on all 4 instances, but they do not see each other on the application layer. :(
Any suggestions on how I can debug Minio?

My steps:

I allowed raw sockets on the jail hosts and forwarded port 9000 from each host to its jail. With tcpdump I can see connections from the other nodes on each node's interface. I can access the web interface on each node with the configured access key and secret (the same on all nodes). In my setup I created the dir /home/minio on each node, which I wanted to use as the content dir. Minio also creates these strange dirs (http:) in the fs root, with a domain/home/minio subdir (it seems I must use this path, not the /home/minio that I created). I start minio with this command on each host while the minio service is running:

minio --config-dir /usr/local/etc/minio/ server http://os0.domain:9000/home/minio http://os1.domain:9000/home/minio http://os2.domain:9000/home/minio http://os3.domain:9000/home/minio

and then I get this message:

Waiting for a minimum of 2 disks to come online (%timer%)

I left it running for a whole night and the message kept repeating :(

On the web interface I get the error "Server not initialized, please try again".

I am using cbsd as the jail wrapper. I also found this config entry via Google:

JSON:
        "file": {
                "enable": true,
                "fileName": "/var/log/minio.log",
                "level": "debug"
        },

but I do not see any debug info in the logfile, just the same "waiting" messages.

Still no luck :( Maybe some host restrictions... I will try to reproduce the same setup on bhyve VMs.
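For a "waiting for disks" symptom like this, a few stock FreeBSD tools can narrow down whether it is a network problem or a Minio one. This is only a sketch of a debugging session (hostnames taken from the command above), not part of the original setup:

```shell
# Is minio actually listening on port 9000 inside this jail?
sockstat -4 -l | grep 9000

# Can this node reach a peer jail on the application port?
nc -z -w 3 os1.domain 9000 && echo "os1.domain:9000 reachable"

# Do the names used in the server list resolve identically on every node?
host os1.domain
```

If every peer is reachable and resolves consistently but the nodes still wait for disks, the problem is more likely a mismatch in the endpoint list or config between nodes than in the network.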
 

n00b:

Finally, got it working.

My problem was:

1) I used the same custom config file everywhere, instead of the one Minio generates itself.
2) I started with too complicated a configuration; on the second try I just created a 'storage' dir in the fs root.

Sharing my Ansible playbook:

YAML:
- hosts: minios
  vars:
    sysrc_disks: "sysrc minio_disks=\"http://node1:9000/storage http://node2:9000/storage http://node3:9000/storage http://node4:9000/storage\""
  gather_facts: no
  tasks:
    - name: install minio and python27 for ansible
      raw: pkg install -y python27 minio
      become: true

    - name: add minio to startup
      shell: sysrc minio_enable=YES
      become: true

    - name: add minio disks to rc.conf
      shell: "{{ sysrc_disks }}"
      become: true

    - name: create config dir
      file:
        path: /usr/local/etc/minio
        state: directory
        owner: minio
        group: minio
        recurse: yes
      become: true

    - name: create minio datadir
      file:
        path: /storage
        state: directory
        owner: minio
        group: minio
        recurse: no
      become: true

    - name: create logfile
      file:
        path: /var/log/minio.log
        state: touch
        owner: minio
        group: minio
      become: true

All the steps were similar to vermaden's manual.
The nodes are syncing and files are uploading; I will hand it to the developers to try the SDK.

As the jail wrapper I used cbsd, which is a bit easier than manual creation and has zfs integration out of the box. That way I had great possibilities to roll my nodes back to a 'clean' state, so testing was pretty comfortable.
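The rollback-to-clean-state workflow mentioned above can be sketched with plain ZFS (the dataset name is hypothetical; cbsd manages its own dataset layout and wraps these operations):

```shell
# Take a snapshot of a freshly configured jail dataset ...
zfs snapshot zroot/jails/minio0@clean

# ... and after a failed experiment, return the jail to that state.
# -r also destroys any snapshots taken after @clean.
zfs rollback -r zroot/jails/minio0@clean
```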

Interestingly, in my setup I have 4 nodes, where 3 nodes have large free space available and 1 has only 10 GB. In the Minio web interface the 3 big nodes show 20 GB of free space (a Minio per-node disk limit?), and the 4th shows 10 GB as expected. I will study why I see only 20 GB where TBs are available.
 

pebkac:

Thank you very much for the documentation and starting this thread, vermaden!

I am looking into setting up minio and I am unsure about the best approach to running it on top of ZFS. I understand that erasure coding within minio makes it unnecessary to have ZFS-level redundancy (Minio's whitepapers for Linux suggest XFS on each disk and adding each disk to minio).

So would I simply set up one zpool per disk and add these one-disk-zpools to minio? :-/
 
vermaden (OP):

> P.S. Since your otherwise excellent document, vermaden, does not cover self signed TLS encryption which is a bit tricky for Distributed Mode, I created a short post explaining such a setup: https://honeyguide.eu/posts/minio/
Thanks, you covered it really well (the TLS setup).

I also really liked your other FreeBSD related articles on your blog :)

Regards,
vermaden
 
vermaden (OP):

> Thank you very much for the documentation and starting this thread, vermaden!
>
> I am looking into setting up minio and I am unsure about what the best approach would be to run it on top of ZFS. I understand that Erasure Encoding within minio makes it unnecessary to have ZFS-level redundancy (minio company whitepapers for Linux suggest XFS on each disk and adding each disk to minio).
>
> So would I simply set up one zpool per disk and add these one-disk-zpools to minio? :-/

Sorry, I must have overlooked that; apologies for the late response :(

Yes, Minio with its erasure coding takes care of checksums and the bit-rot problem, so you do not need ZFS for checksums, but you can still use ZFS for compression or for pooling with read (L2ARC) or write (ZIL) caching. Having a single ZFS pool is enough.

Most guides for most software on Linux will not suggest using ZFS because of the CDDL 'issue' in GPL-religion land.

That is why I like Ubuntu, which comes with ZFS; you can even install Ubuntu with ZFS as the root filesystem. They do not have Boot Environments, but they at least have snapshots you can roll back to.
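A single pool used for pooling and compression rather than redundancy, as described above, might look like this. Device and dataset names are hypothetical, and whether to stripe all drives into one pool is exactly the trade-off discussed in the following posts:

```shell
# One striped pool across all drives: redundancy is left to Minio's
# erasure coding, while ZFS provides pooling and compression.
zpool create storage da1 da2 da3 da4
zfs set compression=lz4 storage
zfs create storage/minio   # mounts at /storage/minio by default
```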
 

pebkac:

> Sorry, I must have overlooked that, sorry for late response :(
>
> Yes, Minio with its erasure coding takes care of checksums and bit rot problem so You do no need ZFS for checksums but You can still use ZFS for compression or pooling with read (L2ARC) or write (ZIL) caching. Having a single ZFS pool is enough.

Thanks a lot, vermaden! So you would put all the drives into one ZFS pool, but with no redundancy? Or one pool per drive, like I had in mind?

Because one pool for all the drives without redundancy would immediately take the complete pool offline as soon as one drive fails, right? That would be much worse than just one drive being offline.
 
vermaden (OP):

> Thanks a lot, vermaden, so you would put all the drives into one ZFS pool but with no redundancy? Or one pool for each drive like I had in mind?
>
> Because one pool for all the drives but without redundancy would immediately put the complete pool offline as soon as there is one drive failing, right? That would be much worse that just one drive being offline.

It depends on how many drives you have, how many 'virtual' disks (or dirs) you want to use for Minio ... and how many hosts/locations.
 