I have data that I want to be as separated as possible.
My initial thought was to create a new pool each time I acquire a chunk of data. But I realized this might not be the right approach if ZFS datasets are already logically separated from one another. In that case it would make a lot more sense to just create a new dataset for each chunk of data I want to keep separate. This data has to be separate for legal reasons.
So the gist of my question: does it make more sense to create separate pools, or to create a single pool with multiple datasets that logically separate the data? I guess everything I'm asking really hinges on whether data from different ZFS datasets can touch.
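For context, this is roughly what I mean by the two approaches. The pool names, dataset names, and device paths below are made up for illustration; these commands also need root and a machine with ZFS installed:

```shell
# Option A: one pool per chunk of data.
# Strong physical separation, but each pool needs its own dedicated disks/vdevs.
zpool create legal-chunk-1 mirror /dev/sdb /dev/sdc
zpool create legal-chunk-2 mirror /dev/sdd /dev/sde

# Option B: one pool, one dataset per chunk of data.
# Logical separation: each dataset is its own filesystem with its own
# mountpoint, properties, quotas, and snapshots.
zpool create tank mirror /dev/sdb /dev/sdc
zfs create -o mountpoint=/data/chunk-1 tank/chunk-1
zfs create -o mountpoint=/data/chunk-2 tank/chunk-2

# Per-dataset properties can be set independently, e.g. a quota:
zfs set quota=500G tank/chunk-1
```

With option B the datasets still share the pool's underlying disks and free space, which may or may not matter for the legal requirement I mentioned.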