Unless your disks are over 10 years old, they probably already have 4K sectors in hardware, just as SirDice said.
The other thing is that for most workloads, the loss of disk space is not relevant. Yes, you may have many small files, and wasting a little extra space for each small file may look like a lot of waste. But the bulk of your disk space use is probably not coming from those many small files, but from a smallish number of large files. For most real-world workloads, using 4K blocks in the file system is a net gain in performance, with a negligible loss of capacity. There are exceptions, for example supercomputing cluster workloads for certain forms of genetic (biochemistry) data sets, which do create billions of tiny files. But you are probably not running those workloads.
If you feel like doing this scientifically, try this: make a little script that lists all file sizes (pretty easy to do with find and xargs), then bin the file sizes by powers of 2, and count them for each bin. Then plot the distribution, weighting each bin by the total bytes it contains. Most likely, you will see that stuff below a few kB is just not relevant.
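Something along these lines would do it (a sketch, not polished; the starting directory is whatever tree you care about, and I use wc -c per file so it works the same on FreeBSD and Linux):

```shell
# Bin file sizes by powers of 2: per bin, print the file count and the
# total bytes those files occupy (the "weighted by size" column).
find . -type f -print0 \
  | xargs -0 -n1 wc -c \
  | awk '{
      # bin = floor(log2(size)); zero-byte files land in bin 0
      bin = ($1 > 0) ? int(log($1) / log(2)) : 0
      count[bin]++
      bytes[bin] += $1
    }
    END {
      for (b = 0; b <= 50; b++)
        if (b in count)
          printf "2^%-2d bytes: %8d files, %14d bytes total\n", \
                 b, count[b], bytes[b]
    }'
```

Compare the "files" column to the "bytes total" column: the bins with the most files are usually not the bins holding most of your data, which is exactly the point.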