How can a 13-bit number represent a 16-bit number? Or why is the fragment offset a multiple of 8?

Discussion in 'Other Advanced Math' started by shivajikobardan, May 22, 2022.

    This is more of a mathematics confusion than a computer science confusion, so please treat it like that. I understand the computer science behind it (I hope so), but my confusion lies in the basic math part.

    I am studying IPv4 fragmentation.

    This is the IPv4 header. The fields of concern here are:

    -> Identification (16 bits)

    -> Flags (3 bits)

    -> Fragment offset (13 bits)

    -> Total length (16 bits)

    [image: IPv4 header diagram]
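    To see how those fields sit in the header, here is a small Python sketch (my own illustration, with made-up field values) of how the flags and the 13-bit fragment offset share a single 16-bit word right after the identification field:

```python
import struct

# Bytes 4-7 of the IPv4 header: identification (16 bits), then
# flags (3 bits) and fragment offset (13 bits) packed into one 16-bit word.
ident, flags, offset = 0x1234, 0b001, 185   # example values: MF flag set, offset = 185 blocks

word = (flags << 13) | offset               # flags occupy the top 3 bits
packed = struct.pack("!HH", ident, word)    # network byte order

# Unpack and split the shared word back apart:
i, w = struct.unpack("!HH", packed)
print(i, w >> 13, w & 0x1FFF)               # 4660 1 185
```

    The point of the layout is that the offset field only gets 13 of the 16 bits, which is exactly where the question below comes from.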


    In order to understand these concepts of fragmentation, I have solved multiple problems. One of them is below.

    [image: the example problem]

    Here we initially have a 5000-byte datagram, of which 20 bytes is the header.

    The MTU is 1500 bytes.


    The answer is this:

    [image: worked solution showing the fragments]
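    To make the numbers concrete, here is a small Python sketch (my own illustration of the standard RFC 791 procedure, not code from any implementation) that reproduces the fragments for this example:

```python
def fragment(total_len, header_len, mtu):
    """Split an IPv4 datagram into fragments.

    Each fragment's data size must be a multiple of 8 bytes (except the
    last fragment's), because the 13-bit fragment offset field counts
    8-byte blocks, not bytes.
    """
    data_len = total_len - header_len
    # Largest multiple of 8 that fits in the MTU after the header:
    max_data = (mtu - header_len) // 8 * 8
    fragments = []
    offset_bytes = 0
    while offset_bytes < data_len:
        chunk = min(max_data, data_len - offset_bytes)
        more = offset_bytes + chunk < data_len     # MF flag
        fragments.append({
            "total_length": header_len + chunk,
            "offset_field": offset_bytes // 8,     # value stored in the header
            "MF": int(more),
        })
        offset_bytes += chunk
    return fragments

for f in fragment(5000, 20, 1500):
    print(f)
```

    For the 5000-byte datagram this gives four fragments carrying 1480, 1480, 1480 and 540 data bytes, with offset fields 0, 185, 370 and 555 (i.e. byte offsets 0, 1480, 2960 and 4440 divided by 8).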

    But I am unable to grasp this simple concept, which I believe even a 5th grader could understand, even though I am a college-level undergraduate. It is pretty shameful for me, but I am leaving my shame aside and asking instead of giving up. I have asked this everywhere and everyone says it is basic math, but it is not clicking in my head. I could just memorize it, and I can already solve any numerical problem related to fragmentation, but this curiosity isn't letting me move on to IPv6.

    It would be immensely helpful if anyone could use the above example to help me understand this simple concept. I am unable to relate to it.

    I agree there are 2^13 = 8192 offset values, i.e. 0, 1, 2, 3, ..., 8191, so at most 8192 fragments are possible. But a datagram can only be as big as 2^16 - 1 bytes. So why are we dividing them? It doesn't make any sense. Maybe it is trying to say that 8192 fragments need to span 65535 bytes, which gives 65535/8192 = 7.99987793. But I have calculated it later, and fragmentation doesn't work like that.
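    The arithmetic I keep circling around can be checked directly (my own sketch of the relationship, stated as assertions):

```python
# The offset field does not count bytes; it counts 8-byte blocks.
# 13 bits of block index times 8 bytes per block covers the full
# range addressable by the 16-bit total length field:
assert 2**13 * 8 == 2**16          # 8192 blocks * 8 bytes = 65536 bytes

# Equivalently, 8 = 2**(16 - 13): the three "missing" bits are the
# three low-order bits of the byte offset, which are forced to zero
# by requiring every non-final fragment to carry a multiple of 8 bytes.
assert 8 == 2**(16 - 13)
print("ok")
```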


    Opinion 1:

    Max "true" fragment offset possible = sum of all previous DATA ONLY, excluding headers
    = 65535 - n*20 bytes,

    where 65535 is the max total length possible, n is the number of fragments, and 20 bytes is the header length.

    The max offset value possible is 8191, so 65535 - n*20 bytes should be represented by 8191 (both max values). But this opinion leads nowhere.



    Opinion 2:

    Max 13-bit number = 8191

    Max 16-bit number = 65535

    So 65535/8191 = 8.00085... (not exactly 8).

    So the problem is: how can the max offset value represent the max data size? But as I said in opinion 1, the max data size can never be 65535, since the header also takes up some space.
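    One way to see where this opinion goes wrong (my own illustration): the factor of 8 comes from dividing the *sizes of the two ranges* (counts of representable values), not from dividing the two *maximum values*:

```python
# Dividing the two maximum values gives a non-integer:
print(65535 / 8191)        # ~8.00085, not 8

# Dividing the two range sizes gives exactly 8,
# because 2**16 values / 2**13 values = 2**3 bytes per offset unit:
print(2**16 // 2**13)      # 8
```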


    Opinion 3:

    Say I want to understand this via a dummy example. Say the fragment offset field is 1 bit and the total length field is 4 bits. What would happen?
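    The dummy example can be worked out with the same ratio rule (a sketch under the stated hypothetical field widths, which are not real IPv4 values): a 1-bit offset and a 4-bit length would make the offset unit 2^(4-1) = 8 bytes, so the two offset values would address byte positions 0 and 8, enough to cover any datagram up to 2^4 = 16 bytes:

```python
# Hypothetical mini-header: 4-bit total length, 1-bit fragment offset.
LENGTH_BITS, OFFSET_BITS = 4, 1
unit = 2 ** (LENGTH_BITS - OFFSET_BITS)   # 8-byte blocks, same ratio as IPv4

# Each offset value addresses the start of one block:
for field in range(2 ** OFFSET_BITS):
    print(field, "-> byte", field * unit)  # 0 -> byte 0, 1 -> byte 8
```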


    Here are some answers that I have read, which gave me further confusion:

    https://networkengineering.stackexc...26/why-ip-fragmentation-is-on-8-byte-boundary


    https://learningnetwork.cisco.com/s/question/0D53i00000Kt7dxCAB/fragment-offset-concept
     