Just a quick question.
Recently I did a little optimising on a FreeBSD server and tested copying a 3GB file across the network to a Windows machine. Windows' copy dialogue reported a burst of 122MB/s before settling at a reported 100+MB/s. Granted, Windows' copy dialogue may not be the most accurate of metrics.
This puzzled me, as I had understood that 'real life' transfer rates were derived by dividing the connection speed by ten (8 bits of data + 2 bits of start/stop signalling, so 10 bits on the wire per byte of payload), which would give a theoretical maximum of 100MB/s on a Gigabit link. So I Googled maximum transfer rates on Gigabit connections, and some reputable sites indicated that the maximum was indeed 125MB/s (1000Mbit/s ÷ 8).
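To make the arithmetic I'm asking about explicit, here is a quick sketch in Python of the two rules of thumb side by side (the numbers are just the Gigabit figures from above, nothing authoritative):

```python
# Gigabit Ethernet line rate in Mbit/s.
link_rate_mbit = 1000

# Divide-by-8: plain bits-to-bytes conversion, ignoring framing.
max_div8 = link_rate_mbit / 8    # 125.0 MB/s

# Divide-by-10: the old async-serial rule of thumb
# (8 data bits + start/stop bits = 10 bits per byte of payload).
max_div10 = link_rate_mbit / 10  # 100.0 MB/s

print(f"divide by 8:  {max_div8:.1f} MB/s")
print(f"divide by 10: {max_div10:.1f} MB/s")
```

My observed burst of 122MB/s sits above the divide-by-10 figure but below the divide-by-8 one, which is what prompted the question.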
Could anyone clarify if theoretical network speeds are divided by 8 or 10? It may be that I am still using outdated maths.