Linux · September 3, 2025

Optimizing Network Performance: Understanding Linux Kernel Network Interrupt Handling

The Linux kernel’s network interrupt handling is a critical component of efficient network data processing. This article provides a detailed, technical exploration of how the Linux kernel manages network interrupts, focusing on the mechanisms for receiving and processing network packets. By understanding the interplay between hardware interrupts, soft interrupts, and protocol handling, system administrators and developers can optimize network performance for high-throughput environments.

Overview of Network Packet Processing

When a network interface card (NIC) receives data from the network, it triggers a hardware interrupt to notify the CPU. The CPU then invokes the NIC driver’s registered interrupt handler, such as ei_interrupt for the NS8390 driver. To prevent interrupt loss and ensure system responsiveness, Linux divides interrupt handling into two phases:

  • Top Half: Executes quickly with hardware interrupts disabled to minimize latency and prevent interrupt loss.
  • Bottom Half: Handles time-consuming tasks with interrupts enabled, allowing other interrupts to be processed.

This article focuses on the bottom half, specifically the soft interrupt mechanism that processes network packets after the initial interrupt handling.
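
This split is wired together at boot: the networking core binds NET_RX_SOFTIRQ to its handler, and driver top halves later raise that vector. Below is a minimal sketch of the registration, modeled on net_dev_init in net/core/dev.c (early kernels passed a third data argument to open_softirq, so treat the exact signature as version-dependent):

    #include <linux/interrupt.h>
    
    /* Bottom-half handler for received packets; discussed below. */
    static void net_rx_action(struct softirq_action *h);
    
    /* At boot, the networking core binds the softirq vector to its handler.
     * Driver interrupt handlers later raise it, e.g. via
     * raise_softirq(NET_RX_SOFTIRQ) from netif_rx(). */
    static int __init net_dev_init(void) {
        open_softirq(NET_RX_SOFTIRQ, net_rx_action);
        return 0;
    }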

Packet Queuing with netif_rx

The netif_rx function is pivotal in queuing incoming packets for further processing. It performs the following tasks:

  1. Retrieve CPU-Specific Queue: Obtains the current CPU’s packet queue (softnet_data) to store incoming packets.
  2. Check Queue Limits: Ensures the queue length does not exceed netdev_max_backlog, a configurable threshold that prevents queue overflow.
  3. Queue Packet: Adds the packet (sk_buff) to the tail of the queue using __skb_queue_tail.
  4. Trigger Soft Interrupt: Initiates the bottom half processing by raising a soft interrupt (NET_RX_SOFTIRQ).
  5. Handle Overflows: Discards packets if the queue exceeds netdev_max_backlog, preventing system overload.
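
A condensed sketch of that flow, loosely following the 2.4/2.6-era netif_rx in net/core/dev.c (simplified: the real function also handles throttling and NAPI scheduling, and helper names vary by kernel version):

    /* Simplified sketch of netif_rx(); not the literal kernel source. */
    int netif_rx(struct sk_buff *skb) {
        struct softnet_data *queue;
        unsigned long flags;
    
        local_irq_save(flags);                  /* callable from hard-IRQ context */
        queue = &__get_cpu_var(softnet_data);   /* this CPU's input queue */
    
        if (queue->input_pkt_queue.qlen <= netdev_max_backlog) {
            __skb_queue_tail(&queue->input_pkt_queue, skb);
            raise_softirq_irqoff(NET_RX_SOFTIRQ);  /* schedule the bottom half */
            local_irq_restore(flags);
            return NET_RX_SUCCESS;
        }
    
        /* Backlog exceeded: drop rather than let the queue grow unbounded. */
        local_irq_restore(flags);
        kfree_skb(skb);
        return NET_RX_DROP;
    }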

Key Considerations

  • Queue Management: Each CPU maintains its own packet queue, ensuring scalability in multi-core systems.
  • Performance Tuning: Raising netdev_max_backlog reduces drops under bursty load, but an oversized backlog only adds queuing latency and memory pressure if the CPU cannot keep up.

Process Flow

| Step | Description | Outcome |
|------|-------------|---------|
| 1. Retrieve Queue | Fetches CPU-specific softnet_data | Ensures per-CPU processing |
| 2. Check Queue Size | Compares queue length to netdev_max_backlog | Prevents overflow |
| 3. Enqueue Packet | Adds packet to queue | Prepares for bottom half |
| 4. Trigger SoftIRQ | Raises NET_RX_SOFTIRQ | Initiates bottom half processing |
| 5. Handle Overflow | Drops packet if queue is full | Maintains system stability |

Bottom Half Processing with net_rx_action

The net_rx_action function, executed as part of the NET_RX_SOFTIRQ soft interrupt, processes queued packets. Its key responsibilities include:

  1. Dequeue Packet: Retrieves a packet from the CPU’s softnet_data queue using __skb_dequeue.
  2. Identify Protocol: Extracts the network layer protocol type from the packet’s Ethernet header (skb->protocol).
  3. Route to Handler: Matches the protocol type to a registered handler in the ptype_base array and invokes the corresponding function, such as ip_rcv for IP packets.
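
In condensed form, the loop looks like the following sketch. Here ptype_lookup is a hypothetical helper standing in for the walk over the ptype_base hash chains described in the next section, and newer kernels pass an extra orig_dev argument to the handler:

    /* Simplified sketch of net_rx_action(); not the literal kernel source. */
    static void net_rx_action(struct softirq_action *h) {
        struct softnet_data *queue = &__get_cpu_var(softnet_data);
        struct sk_buff *skb;
        struct packet_type *ptype;
    
        for (;;) {
            local_irq_disable();        /* the queue is shared with the top half */
            skb = __skb_dequeue(&queue->input_pkt_queue);
            local_irq_enable();
            if (!skb)
                break;                  /* queue drained */
    
            /* skb->protocol was filled in by the driver (eth_type_trans). */
            ptype = ptype_lookup(skb->protocol);   /* hypothetical helper */
            if (ptype)
                ptype->func(skb, skb->dev, ptype); /* e.g. ip_rcv() */
            else
                kfree_skb(skb);
        }
    }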

Protocol Handler Registration

Network layer protocols are registered in the ptype_base array, a hash table indexed by protocol type. The registration function, dev_add_pack, resolves hash collisions by chaining handlers through a next pointer. For example:

  • IP Protocol Registration:
    static struct packet_type ip_packet_type = {
        .type = __constant_htons(ETH_P_IP), /* match IPv4 Ethernet frames */
        .func = ip_rcv,                     /* entry point for IPv4 input */
        .dev = NULL                         /* NULL: accept from any device */
    };
    
    void __init ip_init(void) {
        dev_add_pack(&ip_packet_type);      /* hook into ptype_base */
    }
    

Key Considerations

  • Scalability: The hash table structure supports multiple protocols, with collisions resolved via linked lists.
  • Performance: Local interrupts are disabled only for the brief dequeue, keeping the per-CPU queue consistent with the interrupt-time enqueue path at negligible latency cost.

IP Packet Processing with ip_rcv

For IP packets, the ip_rcv function performs initial validation before further processing:

  1. Packet Type Check: Discards packets not destined for the host (PACKET_OTHERHOST).
  2. Length Validation: Ensures the received data is at least as long as the total length declared in the IP header; any excess is trimmed.
  3. Header Integrity: Verifies the IP header’s version, length, and checksum.
  4. Forward to Next Stage: If valid, passes the packet to ip_rcv_finish via the Netfilter hook NF_IP_PRE_ROUTING.
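
A sketch of those checks, loosely following ip_rcv in net/ipv4/ip_input.c (2.6-era names; error handling and statistics abbreviated):

    /* Sketch of ip_rcv()'s validation, abbreviated from the 2.6-era source. */
    int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt) {
        struct iphdr *iph;
    
        if (skb->pkt_type == PACKET_OTHERHOST)    /* seen promiscuously; not ours */
            goto drop;
    
        if (!pskb_may_pull(skb, sizeof(struct iphdr)))
            goto drop;                            /* too short for an IP header */
        iph = ip_hdr(skb);
    
        if (iph->ihl < 5 || iph->version != 4)    /* header length and version */
            goto drop;
        if (ip_fast_csum((u8 *)iph, iph->ihl))    /* nonzero: bad header checksum */
            goto drop;
        if (skb->len < ntohs(iph->tot_len))       /* buffer shorter than claimed */
            goto drop;
    
        /* Pass through Netfilter; ip_rcv_finish runs unless a hook drops it. */
        return NF_HOOK(PF_INET, NF_IP_PRE_ROUTING, skb, dev, NULL, ip_rcv_finish);
    
    drop:
        kfree_skb(skb);
        return NET_RX_DROP;
    }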

ip_rcv_finish and Routing

The ip_rcv_finish function sets up routing information:

  • Route Lookup: If the skb carries no cached route, calls ip_route_input to decide how the packet should be handled.
  • Local Delivery: If the resulting route is local, ip_local_deliver is invoked; packets being routed through the host go to ip_forward instead.
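
As a sketch (2.6-era, simplified; later kernels reach the cached route through skb_dst() instead of skb->dst):

    /* Sketch of ip_rcv_finish(); simplified from the 2.6-era source. */
    static int ip_rcv_finish(struct sk_buff *skb) {
        struct iphdr *iph = ip_hdr(skb);
    
        /* Look up a route only if none is cached on the skb yet. */
        if (skb->dst == NULL) {
            if (ip_route_input(skb, iph->daddr, iph->saddr, iph->tos, skb->dev))
                goto drop;              /* no route: discard */
        }
    
        /* dst_input() calls the route's input handler: ip_local_deliver()
         * for local traffic, ip_forward() for traffic we are routing. */
        return dst_input(skb);
    
    drop:
        kfree_skb(skb);
        return NET_RX_DROP;
    }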

IP Fragment Handling

The ip_local_deliver function handles IP fragments:

  • Fragment Reassembly: Calls ip_defrag to reassemble fragmented packets.
  • Forward to Transport Layer: Passes complete datagrams to ip_local_deliver_finish via the Netfilter hook NF_IP_LOCAL_IN.
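
A sketch of that step (the ip_defrag signature changed over time; this follows roughly the early-2.6 form, where it returns the reassembled skb, or NULL while fragments are still outstanding):

    /* Sketch of ip_local_deliver(); abbreviated from the early-2.6 source. */
    int ip_local_deliver(struct sk_buff *skb) {
        /* A set MF flag or a nonzero fragment offset marks a fragment. */
        if (ip_hdr(skb)->frag_off & htons(IP_MF | IP_OFFSET)) {
            skb = ip_defrag(skb, IP_DEFRAG_LOCAL_DELIVER);
            if (!skb)
                return 0;   /* reassembly incomplete: wait for more fragments */
        }
    
        /* Complete datagram: pass through Netfilter to the transport layer. */
        return NF_HOOK(PF_INET, NF_IP_LOCAL_IN, skb, skb->dev, NULL,
                       ip_local_deliver_finish);
    }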

Transport Layer Processing

The ip_local_deliver_finish function routes packets to the appropriate transport layer protocol:

  1. Protocol Identification: Extracts the transport protocol type from the IP header.
  2. Handler Lookup: Retrieves the handler from the inet_protos array, a hash table of transport protocol handlers.
  3. Invoke Handler: Calls the protocol-specific handler, such as tcp_v4_rcv for TCP packets.
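
In condensed form (a sketch using the 2.4-style inet_protocol table to match the registration example below; raw-socket delivery and locking are omitted):

    /* Sketch of ip_local_deliver_finish()'s dispatch; simplified. */
    static int ip_local_deliver_finish(struct sk_buff *skb) {
        int protocol = ip_hdr(skb)->protocol;   /* e.g. IPPROTO_TCP == 6 */
        struct inet_protocol *ipprot;
    
        /* inet_protos is indexed by a hash of the protocol number. */
        ipprot = inet_protos[protocol & (MAX_INET_PROTOS - 1)];
        if (ipprot)
            return ipprot->handler(skb);        /* e.g. tcp_v4_rcv() */
    
        /* No handler registered: tell the sender the protocol is unreachable. */
        icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PROT_UNREACH, 0);
        kfree_skb(skb);
        return 0;
    }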

Transport Protocol Registration

Transport layer handlers are registered in the inet_protos array, similar to ptype_base. For example:

  • TCP Handler:
    static struct inet_protocol tcp_protocol = {
        .handler = tcp_v4_rcv,      /* entry point for inbound TCP segments */
        .protocol = IPPROTO_TCP     /* IP protocol number 6 */
    };
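
The table entry is installed during stack initialization; a usage sketch (2.4-era call form; newer kernels rename the structure to net_protocol and pass the protocol number separately):

    /* Called from inet_init() during boot (2.4-era form). Newer kernels use:
     * inet_add_protocol(&tcp_protocol, IPPROTO_TCP); */
    inet_add_protocol(&tcp_protocol);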
    

Best Practices for Optimization

To optimize Linux network interrupt handling:

  • Tune netdev_max_backlog: Increase the net.core.netdev_max_backlog sysctl on high-traffic systems, and monitor for drops (e.g. in /proc/net/softnet_stat).
  • Use Multi-Queue NICs: Spread hardware interrupts across CPUs with multi-queue NICs (RSS); on single-queue hardware, Receive Packet Steering (RPS) distributes packet processing in software.
  • Monitor SoftIRQ Load: Ensure soft interrupt processing does not overwhelm CPU resources.
  • Enable NAPI: Use the New API (NAPI) to replace per-packet interrupts with polling under load; a driver-side sketch follows this list.
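
For the NAPI bullet above, the driver-side pattern looks roughly like the following sketch. The my_* names (my_priv, my_disable_rx_irq, and friends) are hypothetical stand-ins for driver-specific code, and registration via netif_napi_add is omitted because its signature has changed across kernel versions:

    #include <linux/netdevice.h>
    #include <linux/interrupt.h>
    
    struct my_priv {                     /* hypothetical driver state */
        struct napi_struct napi;
        /* ring buffers, register mappings, ... */
    };
    
    /* Hypothetical hardware helpers, assumed to exist in the driver: */
    static void my_disable_rx_irq(struct my_priv *priv);
    static void my_enable_rx_irq(struct my_priv *priv);
    static bool my_rx_ring_has_packets(struct my_priv *priv);
    static int  my_process_one_packet(struct my_priv *priv);
    
    /* Top half: acknowledge the NIC, then defer all work to the poll loop. */
    static irqreturn_t my_isr(int irq, void *dev_id) {
        struct my_priv *priv = dev_id;
    
        my_disable_rx_irq(priv);         /* stop per-packet interrupts */
        napi_schedule(&priv->napi);      /* run my_poll() in softirq context */
        return IRQ_HANDLED;
    }
    
    /* Bottom half: process up to `budget` packets per invocation. */
    static int my_poll(struct napi_struct *napi, int budget) {
        struct my_priv *priv = container_of(napi, struct my_priv, napi);
        int work = 0;
    
        while (work < budget && my_rx_ring_has_packets(priv))
            work += my_process_one_packet(priv);   /* netif_receive_skb() inside */
    
        if (work < budget) {             /* ring drained: re-arm interrupts */
            napi_complete_done(napi, work);
            my_enable_rx_irq(priv);
        }
        return work;
    }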

Conclusion

Understanding Linux kernel network interrupt handling is essential for optimizing network performance. The division of interrupt handling into top and bottom halves ensures efficient processing, with netif_rx queuing packets and net_rx_action routing them to protocol handlers. IP packet processing, from validation to transport layer handoff, is streamlined for reliability and scalability. By applying best practices, system administrators can achieve robust network performance in demanding environments.