How are packets from different applications assembled to go "over the wire"

"I'm learning about the TCP/IP stack. Say a host device has more than one app open, say apps A and B, and they each communicate with distant host devices over the internet. I assume that separate datagrams (packets) will be presented in the frames to the physical layer. But what part of the technology determines how those different frames are assembled to go "over the wire". Is it frame A then frame B repeating or is it random like A, A, B, A, B, B, etc.

I hope my question makes sense, thanks

I've searched many internet sites for an answer
 
This has nothing to do with the physical layer at all. It's governed by the vagaries of your operating system's scheduler and network stack.
 
Thanks Jose for your reply. I understand (I think) that the operating system carries out "concrete" functions which can be attributed to the abstract layers of the TCP/IP model. So the OS will encapsulate the data from the apps and, having "passed" it down the chain, will place it "on the wire" for transmission over the internet. I had assumed that action is attributable to the physical layer, or is it the data link layer? I will have to study some more to understand the OS scheduler and network stack. Is the network stack something different to the TCP/IP stack?

Thanks again
 
Thanks SirDice for the YouTube link to the series of Networking Tutorials by Ben Eater. These are really good explanations and I'll study them all in detail.

I'm not sure at the moment if those tutorials will answer the question I asked in my first post. Jose has said the answer to my question is within the vagaries of the Scheduler in the Operating System. I will pursue that to understand it better.

Perhaps I should explain further. Here is my understanding, which is perhaps too simplistic. If two Applications, A and B, are open, the data from each is processed by the OS and placed in a series of Frames (which also contain headers from intermediate layers). In terms of the OSI or TCP/IP model these Frames are at the Data Link layer. So there are two series of Frames, one for App A and one for App B. They will leave the device via the Physical Layer as a bitstream, but as they are sharing a common path, the Frames from each App have to be "sequenced" in turn, or in some other order, onto that path. So my query is: in what order is this done, what rules apply, and at what layer of the model is the OS completing this function?

Thanks
 
Is it frame A then frame B repeating, or is it random like A, A, B, A, B, B, etc.?
It will usually be first come, first served. That is, whenever app A writes to its TCP socket, its data will be queued up. Whenever B writes to its TCP socket, B's data is queued up on the same outgoing queue. But a lot of other things can influence this. This is a fairly complex topic, so no simple answer will be 100% correct!
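To make the first-come-first-served idea concrete, here is a tiny C sketch. It is not real kernel code and the structure and function names are made up; it just models one outgoing FIFO queue shared by two apps:

/* Toy model of a single outgoing interface queue (NOT real kernel code).
 * Each "write" by an application becomes one queued packet; the driver
 * drains the queue in arrival order (first come, first served).
 */
#include <stdio.h>

struct pkt {
    char app;   /* which application wrote the data */
    int  seq;   /* per-application sequence number  */
};

#define QLEN 16
static struct pkt queue[QLEN];
static int head = 0, tail = 0;

static void enqueue(char app, int seq)   /* an app calls write()/send() */
{
    queue[tail++ % QLEN] = (struct pkt){ app, seq };
}

static void drain(void)                  /* the "driver" puts frames on the wire */
{
    while (head < tail) {
        struct pkt p = queue[head++ % QLEN];
        printf("on the wire: %c%d\n", p.app, p.seq);
    }
}

int main(void)
{
    /* Whatever order the writes happen to arrive in is the order
     * the frames leave; there is no fixed A, B, A, B pattern. */
    enqueue('A', 1);
    enqueue('A', 2);
    enqueue('B', 1);
    enqueue('A', 3);
    enqueue('B', 2);
    drain();          /* prints A1 A2 B1 A3 B2 */
    return 0;
}

The point is simply that the order on the wire is the order in which the writes happened to reach the queue.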
Perhaps I should explain further. Here is my understanding, which is perhaps too simplistic. If two Applications, A and B, are open, the data from each is processed by the OS and placed in a series of Frames (which also contain headers from intermediate layers). In terms of the OSI or TCP/IP model these Frames are at the Data Link layer. So there are two series of Frames, one for App A and one for App B. They will leave the device via the Physical Layer as a bitstream, but as they are sharing a common path, the Frames from each App have to be "sequenced" in turn, or in some other order, onto that path. So my query is: in what order is this done, what rules apply, and at what layer of the model is the OS completing this function?
Usually there will be one outgoing queue of *packets* (or data frames). All outgoing TCP/UDP traffic will eventually end up there. There may be more complex queueing implementations (for example, class-based queueing) where higher-priority or real-time traffic may be put on different queues which may get serviced earlier.
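And here is a similarly made-up sketch of the strict-priority idea (the class names and numbers are invented), where a "realtime" class is always drained before the best-effort class:

/* Toy strict-priority queueing (illustrative only, not a real queueing
 * discipline).  Two classes: the "realtime" queue is always drained
 * before the best-effort queue, so later realtime packets can overtake
 * earlier bulk packets.
 */
#include <stdio.h>

struct pkt { char app; int seq; };
struct pq  { struct pkt p[16]; int n; };

static void enq(struct pq *q, char app, int seq)
{
    q->p[q->n++] = (struct pkt){ app, seq };
}

int main(void)
{
    struct pq realtime = {0}, bulk = {0};

    enq(&bulk,     'A', 1);   /* bulk download writes first...      */
    enq(&bulk,     'A', 2);
    enq(&realtime, 'B', 1);   /* ...but the VoIP app's packets      */
    enq(&realtime, 'B', 2);   /* still leave the machine first      */

    for (int i = 0; i < realtime.n; i++)
        printf("on the wire: %c%d (realtime)\n", realtime.p[i].app, realtime.p[i].seq);
    for (int i = 0; i < bulk.n; i++)
        printf("on the wire: %c%d (best effort)\n", bulk.p[i].app, bulk.p[i].seq);
    return 0;
}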
 
Thanks Jose for your reply. I cannot think of another way to ask my question, so I'll break it down further. I'll assume nothing, but I will ask some further questions which I hope will make my thinking a bit clearer.

I'm reasonably certain I'm not thinking about the things you have sent the links for.

Q1. On a host device, say a laptop, does an open App (A) which communicates with the internet have its data processed through a series of encapsulations, finally ending in a Frame?

Q2. With App (A) still open, does another open App (B) which communicates with the internet have its data processed through a series of encapsulations, finally ending in a Frame?

Q3. Can this processing be described as "passing the data down" the layers of the OSI or TCP/IP model?

Q4. Will there be a succession of A frames and B frames because there is more data from the app than can be accommodated in a single frame?

Q5. When the frames are converted to a bitstream (sorry, an assumption there) to send over a common transmission path, do the frames go out as a serial transmission? I mean, if (no doubt erroneously) I term a frame a packet, does each packet occupy a different time on the line?

Q6. If the answer to Q5 is yes, then that is what I meant by "sequencing of packets". They can't be on the line at the same time. But is there a rule in the processing that says packet(s) A has priority and should go out first, rather than packet(s) B?

If my original question is still not clear I'll hold fire on any further posts until I can express it in a better way.

Thanks for your reply and patience
 
It will usually be first come, first served. That is, whenever app A writes to its TCP socket, its data will be queued up. Whenever B writes to its TCP socket, B's data is queued up on the same outgoing queue. But a lot of other things can influence this. This is a fairly complex topic, so no simple answer will be 100% correct!

Usually there will be one outgoing queue of *packets* (or data frames). All outgoing TCP/UDP traffic will eventually end up there. There may be more complex queueing implementations (for example, class-based queueing) where higher-priority or real-time traffic may be put on different queues which may get serviced earlier.
Thanks Bakul for your reply. I had not seen this before my latest reply to Jose.

That's great info re the queuing system and the possible prioritisation within that process.

If I wanted to align that queueing process to a layer of the OSI or TCP/IP model, would it be the Data Link layer?

Thanks
 
If I wanted to align that queueing process to a layer of the OSI or TCP/IP model, would it be the Data Link layer?
Not sure what you are asking. Typically each network device will have its own input/output queues -- they would be at layer 2. What is the context of your questions? Are you already learning about TCP/IP from some books or tutorials?
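If it helps, here is another made-up sketch (interface names and counters are just examples) showing that the output queue is per network interface; the layer-3 routing decision picks the interface and therefore the queue a packet lands on:

/* Sketch: each network interface has its own output queue (layer 2).
 * The routing decision at layer 3 picks the interface, and therefore
 * which queue a packet goes to.  All names here are invented.
 */
#include <stdio.h>
#include <string.h>

struct ifqueue { const char *ifname; int qlen; };

static struct ifqueue ifaces[] = { { "em0", 0 }, { "wlan0", 0 } };

static void output(const char *dest_if, char app)
{
    for (size_t i = 0; i < sizeof ifaces / sizeof ifaces[0]; i++)
        if (strcmp(ifaces[i].ifname, dest_if) == 0) {
            ifaces[i].qlen++;
            printf("app %c -> queued on %s (depth now %d)\n",
                   app, dest_if, ifaces[i].qlen);
            return;
        }
}

int main(void)
{
    output("em0",   'A');   /* route for A's peer goes via the wired NIC */
    output("em0",   'B');   /* B's peer happens to use the same NIC ...  */
    output("wlan0", 'B');   /* ... or a different one, per the route     */
    return 0;
}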
 
Q1. On a host device, say a laptop, does an open App (A) which communicates with the internet have its data processed through a series of encapsulations, finally ending in a Frame?
Yes.

Q2. With App (A) still open, does another open App (B) which communicates with the internet have its data processed through a series of encapsulations, finally ending in a Frame?
Yes.

Q3. Can this processing be described as "passing the data down" the layers of the OSI or TCP/IP model?
Whatever metaphor works for you.

Q4. Will there be a succession of A frames and B frames because there is more data from the app than can be accommodated in a single frame?
Breaking data up into packets has nothing to do with how many apps are running. Yes, in all but the most trivial of cases, the data sent or received by any process will be broken up into a series of packets. What will never happen is winding up with a packet that has data from both A and B, regardless of the volume sent by either.
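For illustration only (the 1460-byte MSS is just a typical value for Ethernet, and the byte counts are made up), here is a small C sketch of how each connection's stream is cut into segments independently of any other connection:

/* Sketch: each TCP connection's byte stream is cut into segments
 * independently (here with an assumed 1460-byte MSS).  Segments from
 * different connections are never merged into one packet.
 */
#include <stdio.h>

#define MSS 1460   /* typical payload per TCP segment on Ethernet */

static void segment(char app, long bytes_written)
{
    int seq = 0;
    while (bytes_written > 0) {
        long chunk = bytes_written > MSS ? MSS : bytes_written;
        printf("app %c: segment %d carries %ld bytes\n", app, ++seq, chunk);
        bytes_written -= chunk;
    }
}

int main(void)
{
    segment('A', 5000);   /* one 5000-byte write -> 4 segments */
    segment('B', 1000);   /* fits in a single segment          */
    return 0;
}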

Q5. When the frames are converted to a bitstream (sorry, an assumption there) to send over a common transmission path, do the frames go out as a serial transmission? I mean, if (no doubt erroneously) I term a frame a packet, does each packet occupy a different time on the line?
Firstly, a frame is not the same as a packet. Packets happen at the layers that are independent of the physical layer. They are then further packaged into frames to be sent over the physical medium. An IP packet can be fragmented into more than one data link layer frame. In baseband networks only one frame can be present on the medium at any given point in time. Broadband networks can accommodate multiple concurrent signals; see frequency-division multiplexing (FDM), for example.
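As a worked example (assuming a plain 1500-byte Ethernet MTU and a 20-byte IPv4 header with no options), this small C program shows how one oversized IP packet becomes several link-layer frames:

/* Worked example of IPv4 fragmentation: one IP packet larger than the
 * link MTU is carried in several link-layer frames.  Numbers assume an
 * Ethernet MTU of 1500 and a 20-byte IP header (no options).
 */
#include <stdio.h>

int main(void)
{
    const int mtu = 1500, iphdr = 20;
    /* per-fragment payload must be a multiple of 8 bytes */
    const int max_payload = ((mtu - iphdr) / 8) * 8;   /* 1480 */

    int total_payload = 4000 - iphdr;                  /* 3980 bytes of data */
    int offset = 0, frag = 0;

    while (total_payload > 0) {
        int chunk = total_payload > max_payload ? max_payload : total_payload;
        printf("fragment %d: offset %5d bytes, %4d bytes of payload -> one frame\n",
               ++frag, offset, chunk);
        offset += chunk;
        total_payload -= chunk;
    }
    /* prints 3 fragments: 1480 + 1480 + 1020 bytes, each in its own frame */
    return 0;
}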

Q6. If the answer to Q5 is yes, then that is what I meant by "sequencing of packets" . They can't be on the line at the same time. But is there a rule in the processing that says packet(s) A has priority and should go out first, rather than packet(s) B ?
bakul already answered this.
 
The following sources are excellent reads:
The easiest read is the book by Kozierok. The Comer books are a little dated as they don't cover IPv6, but they are still full of good information (IPv4, TCP, UDP, ICMP). The Stevens books are classics. I recommend any of them, or better, all of them. I've purchased the first two (Comer and Kozierok). My go-to is the Kozierok most of the time as it also covers IPv6.

To answer your question specifically: as TCP is basically the same under IPv4 and IPv6, any of the books will tell you all you need to know about it. You can't go wrong with any of them.
 
Thanks for your detailed reply
 
Thanks for your reply. Yes, I am a novice learner trying to understand how the concepts/functions of the layers of the OSI and TCP/IP models relate to the practical achievement of those functions in host devices, and later in my studies the broader networking functions, routers etc. I am not studying for a qualification, just for my personal understanding. Thus far my study material has been internet educational articles and forum discussions such as this.
 