CENTOS PACKAGE REPOSITORY

Anyone running a modern operating system is most likely using TCP/IP to send and receive data. Modern TCP/IP stacks are fairly complex and have a slew of tunables to control their behavior. Deciding when (and when not) to tune isn't always clear cut, since the documentation and the advice of various network “experts” don't always jibe.

 

When I'm looking into network-related performance problems, one of the first things I review is the netstat “-s” output:

 

$ netstat -s

Ip:

    25030820 total packets received

    269 with invalid addresses

    0 forwarded

    0 incoming packets discarded

    21629075 incoming packets delivered

    21110503 requests sent out

Icmp:

    12814 ICMP messages received

    0 input ICMP message failed.

    ICMP input histogram:

        destination unreachable: 2

        echo requests: 12809

        echo replies: 3

    12834 ICMP messages sent

    0 ICMP messages failed

    ICMP output histogram:

        destination unreachable: 22

        echo request: 3

        echo replies: 12809

IcmpMsg:

        InType0: 3

        InType3: 2

        InType8: 12809

        OutType0: 12809

        OutType3: 22

        OutType8: 3

Tcp:

    138062 active connections openings

    1440632 passive connection openings

    7 failed connection attempts

    2262 connection resets received

    8 connections established

    12225207 segments received

    10785279 segments send out

    10269 segments retransmited

    0 bad segments received.

    69115 resets sent

Udp:

    553643 packets received

    22 packets to unknown port received.

    0 packet receive errors

    6911684 packets sent

UdpLite:

TcpExt:

    33773 invalid SYN cookies received

    154132 TCP sockets finished time wait in fast timer

    6 time wait sockets recycled by time stamp

    72284 delayed acks sent

    3 delayed acks further delayed because of locked socket

    Quick ack mode was activated 269 times

    3359 packets directly queued to recvmsg prequeue.

    2592713 packets directly received from backlog

    4021 packets directly received from prequeue

    3557638 packets header predicted

    1732 packets header predicted and directly queued to user

    1939991 acknowledgments not containing data received

    3179859 predicted acknowledgments

    1631 times recovered from packet loss due to SACK data

    Detected reordering 1034 times using FACK

    Detected reordering 1007 times using SACK

    Detected reordering 622 times using time stamp

    1557 congestion windows fully recovered

    4236 congestion windows partially recovered using Hoe heuristic

    299 congestion windows recovered after partial ack

    2 TCP data loss events

    5 timeouts after SACK recovery

    5 timeouts in loss state

    2511 fast retransmits

    2025 forward retransmits

    88 retransmits in slow start

    5518 other TCP timeouts

    295 DSACKs sent for old packets

    35 DSACKs sent for out of order packets

    251 DSACKs received

    25247 connections reset due to unexpected data

    2248 connections reset due to early user close

    6 connections aborted due to timeout

    TCPSACKDiscard: 2707

    TCPDSACKIgnoredOld: 65

    TCPDSACKIgnoredNoUndo: 12

    TCPSackShifted: 4176

    TCPSackMerged: 2301

    TCPSackShiftFallback: 98834

IpExt:

    InMcastPkts: 2

    OutMcastPkts: 3390453

    InBcastPkts: 8837402

    InOctets: 5156017179

    OutOctets: 2509510134

    InMcastOctets: 80

    OutMcastOctets: 135618120

    InBcastOctets: 2127986990

The netstat output contains a slew of data you can use to see how much data your host is processing, whether it's accepting and processing that data efficiently, and whether the buffers that link the various layers (Ethernet -> IP -> TCP -> APP) are working optimally.
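
If you just want to keep an eye on a handful of these counters during a test, a quick grep is usually enough. Here's a rough sketch that highlights changes to the retransmission and reset counters as they occur (the patterns match the output above, but the exact strings vary between kernel versions):

$ watch -d 'netstat -s | grep -iE "retrans|reset"'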

When I build new Linux machines via kickstart, I make sure my profile contains the ktune package. That is all the tuning I do to start, unless an application or database requires a specific setting (think large pages and SysV IPC settings for Oracle).
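
For reference, a minimal and purely illustrative kickstart fragment that pulls in ktune might look something like this (the package group is just an example):

%packages
@core
ktune
%end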

Once I've met with an application resource and a business analyst, I like to pound the application with a representative benchmark and compare the system's performance before and after the stress test. Comparing the before and after results shows me exactly where the system is choking (this is very rare), or whether the application needs to be modified to accommodate additional load. If the application is a standard TCP/IP-based application that uses HTTP, I'll typically turn to siege and iperf to stress my applications and systems.
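
The exact invocations depend on the application being tested, but as a rough sketch (the hostnames, URL and numbers below are placeholders, not recommendations), a siege and iperf run might look like this:

$ siege -c 50 -t 10M http://webserver.example.com/app/

$ iperf -s                                # on the server
$ iperf -c server.example.com -t 60 -P 4  # on the client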

If I notice during load testing that data is being dropped in one or more queues, I'll fire up dropwatch to observe where in the TCP/IP stack the drops are occurring:

 

$ dropwatch -l kas

Initalizing kallsyms db

dropwatch> start

Enabling monitoring...

Kernel monitoring activated.

Issue Ctrl-C to stop monitoring

1 drops at netlink_sendskb+14d (0xffffffff813df30e)

1 drops at ip_rcv_finish+32e (0xffffffff813f0c93)

4 drops at ip_local_deliver+291 (0xffffffff813f12d7)

64 drops at unix_stream_recvmsg+44a (0xffffffff81440fb9)

32 drops at ip_local_deliver+291 (0xffffffff813f12d7)

23 drops at unix_stream_recvmsg+44a (0xffffffff81440fb9)

1 drops at ip_rcv_finish+32e (0xffffffff813f0c93)

4 drops at .brk.dmi_alloc+1e60bd47 (0xffffffffa045fd47)

2 drops at skb_queue_purge+60 (0xffffffff813b6542)

64 drops at unix_stream_recvmsg+44a (0xffffffff81440fb9)

 

This allows you to see whether data is being dropped at the link layer, the IP layer, the UDP/TCP layer or the application layer. If the drops are occurring somewhere in TCP/IP (i.e., inside the kernel), I will review the kernel documentation and source code to see what happens in the specific areas of the kernel listed in the dropwatch output, and track down the sysctl values that control the sizes of the buffers at that layer (some are dynamic, some are fixed).
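
For example, if the drops point at the TCP receive path, the socket buffer limits are one of the first things I'll look at. These are ordinary sysctl values you can inspect and, if necessary, raise (the numbers below are purely illustrative, not recommendations):

$ sysctl net.core.rmem_max net.ipv4.tcp_rmem

$ sysctl -w net.core.rmem_max=16777216
$ sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"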

 

Tuning applications to perform optimally is a subject that has filled dozens and dozens of books, and it's a fine art that you learn from seeing problems erupt in the field. It also helps to know how to interpret all the values in the netstat output, and I cannot recommend TCP/IP Volume I, TCP/IP Volume II and TCP/IP Volume III enough! Everyone who runs an IP-connected system should be required to read them before they are allowed access to the system. :)

 

This article was posted by Matty on 2011-07-11 19:06:00 -0400

 

While poking around the CentOS package repository, I came across the ktune package. Ktune comes with a set of kernel tunables that are useful for network- and disk-intensive workloads, and provides the ktune service to apply these settings during system startup. Ktune includes settings for the TCP/IP buffers, an entry that makes the deadline scheduler the default I/O scheduler, and entries to adjust the swappiness, dirty_ratio and pagecache settings. The full list of tunables can be viewed by paging through the following two configuration files:

 

$ less /etc/sysctl.ktune

$ less /etc/sysconfig/ktune
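
If you just want a quick feel for the network-related entries without paging through the whole file, something like the following grep works (the exact contents vary with the ktune version):

$ grep -E 'net\.(core|ipv4)' /etc/sysctl.ktune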

To activate the settings, you can enable the ktune service with the chkconfig and service utilities:

$ chkconfig ktune on

$ service ktune start

 

Saving current sysctl settings: [ OK ]

Applying ktune sysctl settings from /etc/sysctl.ktune: [ OK ]

Applying sysctl settings from /etc/sysctl.conf: [ OK ]

Applying deadline elevator: sda [ OK ]
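
To spot-check that the settings took effect, you can query a couple of the values ktune adjusts; the active I/O scheduler shows up in brackets in sysfs, and the sysctl changes can be read back directly (which keys are worth checking depends on your ktune version):

$ cat /sys/block/sda/queue/scheduler
$ sysctl vm.swappiness vm.dirty_ratio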

 

This is an awesome package, and I definitely plan to use the network settings on all of my CentOS hosts.

 

This article was posted by Matty on 2009-04-26 12:06:00 -0400

 

 

https://prefetch.net/blog/index.php/2009/04/26/performance-tuning-linux-servers-with-ktune/