NAS rscript on 10G NICs



34 minutes ago, KaYot said:

I'll ask once more: why the hell bridge VLANs? Is that some screwed-up way of terminating them without ever learning how routing works and what a VLAN is?

then tell me the right way to do it, so that any subnet can be assigned to any VLAN


22 minutes ago, skybetik said:

O_o, are you serious?

Well, at the very least interrupt pinning (cpuset-ix-iflib), loader and sysctl tunables, well, buffers and such )

that's cool, but testing all this stuff from Google on 1,500 live subscribers doesn't really appeal to me

don't you have a working 10G tuning config?

7 minutes ago, a_n_h said:

tell me the right way to do it, so that any subnet can be assigned to any VLAN

ip unnumbered, Cisco-style

 

By the book it's one subnet per VLAN,

with routing between the VLANs; that's what you should aim for.

 

but everything here was built as a mishmash and I can't be bothered to clean it all up.

12 minutes ago, mgo said:

that's cool, but testing all this stuff from Google on 1,500 live subscribers doesn't really appeal to me

don't you have a working 10G tuning config?

There won't be one; everyone tunes for their own workload. When I'm at my computer I'll throw something together in a PM.

30 minutes ago, a_n_h said:

tell me the right way to do it, so that any subnet can be assigned to any VLAN

There's a technology for that, it's called ip unnumbered. On BSD it can be implemented with a couple of simple crutches.

11 minutes ago, mgo said:

bridge is exactly one of those crutches

No.

A crutch is a couple of scripts implementing the technology you need (for unnumbered: a route to the client IP is bound to the right VLAN instead of assigning an IP to the interface). The technology itself then works as intended; it just takes crutches to set it up.

A bridge for terminating VLANs, though, is not a crutch; it's the gloomy genius of belching ponies.

1 hour ago, skybetik said:

There are different schemes, and at the very least I'm scared of anything with more than 2 interfaces ) and I don't want to know what goes on inside it.

I'm not getting into an argument, everyone has their own way )

So tell me, how should this be organized properly? There are about 50 VLANs, a separate VLAN per OLT port. The system is FreeBSD 12.1, traffic is almost 2G, and a move to 10G is planned.

 

8 minutes ago, NETOS said:

So tell me, how should this be organized properly? There are about 50 VLANs, a separate VLAN per OLT port. The system is FreeBSD 12.1, traffic is almost 2G, and a move to 10G is planned.

 

I already said I'm not getting into an argument; everyone picks their own methods ("correct" is whatever is convenient for you).

13 hours ago, skybetik said:

I already said I'm not getting into an argument; everyone picks their own methods ("correct" is whatever is convenient for you).

But I asked for advice on how you have it set up. You can write me a PM. I'm facing the decision of whether to run a bridge myself.

2 hours ago, NETOS said:

But I asked for advice on how you have it set up. You can write me a PM. I'm facing the decision of whether to run a bridge myself.

The more correct way is ip unnumbered; I chose bridge purely out of personal preference and convenience.
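
For context on what the bridge approach being debated here looks like, a minimal rc.conf sketch of bridge-based VLAN termination; the interface names, VLAN IDs and the 172.31.0.1/16 gateway are illustrative placeholders, not Pautiina's actual config:

cloned_interfaces="vlan100 vlan101 bridge0"
create_args_vlan100="vlan 100 vlandev ix0"
create_args_vlan101="vlan 101 vlandev ix0"
ifconfig_vlan100="up"
ifconfig_vlan101="up"
# All subscriber VLANs become ports of one bridge; the gateway address
# lives on the bridge, so any subnet can be handed out on any member VLAN
ifconfig_bridge0="addm vlan100 addm vlan101 up"
ifconfig_bridge0_alias0="inet 172.31.0.1 netmask 255.255.0.0"

The price is that all member VLANs end up in one broadcast domain, which is exactly what KaYot objects to above.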

 


[attached: chart3 traffic graphs]

Without dummynet. NAT.

 

Crank the tuning.

 

The same kind of server, with dummynet. NAT:

[attached: chart3 traffic graphs]

20 hours ago, KaYot said:

There's a technology for that, it's called ip unnumbered. On BSD it can be implemented with a couple of simple crutches.

No crutches at all. Everything with standard tools.

49 minutes ago, Pautiina said:

No crutches at all. Everything with standard tools.

Purely standard tools won't hang a per-client route on their own; that's exactly the small crutches I mean.

22 hours ago, skybetik said:

When I'm at my computer I'll throw something together in a PM

me too, if it's not too much trouble

an hour ago, Pautiina said:

Crank the tuning.

and what OS is it running?

2 hours ago, WideAreaNetwork said:

me too, if it's not too much trouble

and what OS is it running?

FreeBSD 12.2

2 hours ago, KaYot said:

Purely standard tools won't hang a per-client route on their own; that's exactly the small crutches I mean.

Oh come on. Show me at least one crutch.

ifconfig lo0 192.168.1.1/24 alias

route add 192.168.1.2/32 -iface vlan100

 

So where's the crutch?

Write it all out properly in rc.conf and you're done.
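
A plausible rc.conf rendering of those two commands, using the standard static_routes mechanism; in practice the per-subscriber /32 routes usually come from the billing system, so the route name here is illustrative:

# shared gateway address parked on loopback
ifconfig_lo0_alias0="inet 192.168.1.1 netmask 255.255.255.0"
# per-subscriber /32 route pointing at that subscriber's VLAN
static_routes="client1"
route_client1="192.168.1.2/32 -iface vlan100"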

 

 

13 minutes ago, WideAreaNetwork said:

could I ask you to share the cpuset-ix-iflib script, loader and sysctl settings, etc., here or in a PM

there'll be a lot of us asking; me too, if not here then in a PM...

32 minutes ago, WideAreaNetwork said:

could I ask you to share the cpuset-ix-iflib script, loader and sysctl settings, etc., here or in a PM

 

17 minutes ago, a_n_h said:

there'll be a lot of us asking; me too, if not here then in a PM...

 

Posting everything that matters. The rest doesn't.

 

Server without dummynet.

 

CPU: Intel(R) Xeon(R) E-2288G CPU @ 3.70GHz (3696.31-MHz K8-class CPU)

less /boot/loader.conf

ipmi_load="YES"

# BSDRP
hw.em.rx_process_limit="-1"
hw.igb.rx_process_limit="-1"
hw.ix.rx_process_limit="-1"

# Allow unsupported SFP
hw.ix.unsupported_sfp="1"
hw.ix.allow_unsupported_sfp="1"

# https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards#TSO.2FLRO
hw.ix.flow_control=0

# H-TCP Congestion Control for a more aggressive increase in speed on higher
# latency, high bandwidth networks with some packet loss.
#cc_htcp_load="YES"

net.link.ifqmaxlen="16384"  # (default 50)

# qlimit for igmp, arp, ether and ip6 queues only (netstat -Q) (default 256)
net.isr.defaultqlimit="4096" # (default 256)

# increase the number of network mbufs the system is willing to allocate.  Each
# cluster represents approximately 2K of memory, so a value of 524288
# represents 1GB of kernel memory reserved for network buffers. (default
# 492680)
kern.ipc.nmbclusters="5242880"
kern.ipc.nmbjumbop="2621440"

# Size of the syncache hash table, must be a power of 2 (default 512)
net.inet.tcp.syncache.hashsize="1024"

# Limit the number of entries permitted in each bucket of the hash table. (default 30)
net.inet.tcp.syncache.bucketlimit="100"

# limit per-workstream queues (use "netstat -Q"; if Qdrop is greater than 0,
# increase this directive) (default 10240)
net.isr.maxqlimit="1000000"
less /etc/sysctl.conf

# $FreeBSD: stable/12/sbin/sysctl/sysctl.conf 337624 2018-08-11 13:28:03Z brd $
#
#  This file is read when going to multi-user and its contents piped thru
#  ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for details.
#

# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
#security.bsd.see_other_uids=0

# Tuning from INTERNET+UBILLING
net.inet.ip.intr_queue_maxlen=10240
# For ipfw dynamic rule
net.inet.ip.fw.dyn_max=65535
net.inet.ip.fw.dyn_buckets=2048
net.inet.ip.fw.dyn_syn_lifetime=10
net.inet.ip.fw.dyn_ack_lifetime=120


# FreeBSD 12.1 .:. /etc/sysctl.conf .:. version 0.66
# https://calomel.org/freebsd_network_tuning.html

# TCP Tuning: The throughput of connection is limited by two windows: the
# (Initial) Congestion Window and the TCP Receive Window (RWIN). The Congestion
# Window avoids exceeding the capacity of the network (RACK, CAIA, H-TCP or
# NewReno congestion control); and the Receive Window avoids exceeding the
# capacity of the receiver to process data (flow control). When our server is
# able to process packets as fast as they are received we want to allow the
# remote sending host to send data as fast as the network, Congestion Window,
# will allow. https://en.wikipedia.org/wiki/TCP_tuning

# IPC Socket Buffer: the maximum combined socket buffer size, in bytes, defined
# by SO_SNDBUF and SO_RCVBUF. kern.ipc.maxsockbuf is also used to define the
# window scaling factor (wscale in tcpdump) our server will advertise. The
# window scaling factor is defined as the maximum volume of data allowed in
# transit before the receiving server is required to send an ACK packet
# (acknowledgment) to the sending server. FreeBSD's default maxsockbuf value is
# two(2) megabytes which corresponds to a window scaling factor (wscale) of
# six(6) allowing the remote sender to transmit up to 2^6 x 65,535 bytes =
# 4,194,240 bytes (4MB) in flight, on the network before requiring an ACK
# packet from our server. In order to support the throughput of modern, long
# fat networks (LFN) with variable latency we suggest increasing the maximum
# socket buffer to at least 16MB if the system has enough RAM. "netstat -m"
# displays the amount of network buffers used. Increase kern.ipc.maxsockbuf if
# the counters for "mbufs denied" or "mbufs delayed" are greater than zero(0).
# https://en.wikipedia.org/wiki/TCP_window_scale_option
# https://en.wikipedia.org/wiki/Bandwidth-delay_product
#
# speed:   1 Gbit   maxsockbuf:   2MB   wscale:  6   in-flight:  2^6*65KB =    4MB (default)
# speed:   2 Gbit   maxsockbuf:   4MB   wscale:  7   in-flight:  2^7*65KB =    8MB
# speed:  10 Gbit   maxsockbuf:  16MB   wscale:  9   in-flight:  2^9*65KB =   32MB
# speed:  40 Gbit   maxsockbuf: 150MB   wscale: 12   in-flight: 2^12*65KB =  260MB
# speed: 100 Gbit   maxsockbuf: 600MB   wscale: 14   in-flight: 2^14*65KB = 1064MB
#
# !!! WARNING: a wrong value here can take the server down
#kern.ipc.maxsockbuf=2097152    # (wscale  6 ; default)
#kern.ipc.maxsockbuf=4194304    # (wscale  7)
kern.ipc.maxsockbuf=16777216    # (wscale  9)
#kern.ipc.maxsockbuf=157286400  # (wscale 12)
#kern.ipc.maxsockbuf=614400000  # (wscale 14)

# TCP Buffers: Larger buffers and TCP Large Window Extensions (RFC1323) can
# help alleviate the long fat network (LFN) problem caused by insufficient
# window size; limited to 65535 bytes without RFC 1323 scaling. Verify the
# window scaling extension is enabled with net.inet.tcp.rfc1323=1, which is
# default. Both the client and server must support RFC 1323 to take advantage
# of scalable buffers. A network connection at 100Mbit/sec with a latency of 10
# milliseconds has a bandwidth-delay product of 125 kilobytes
# ((100*10^6*10*10^-3)/8=125000) which is the same BDP of a 1Gbit LAN with
# one(1) millisecond latency ((1000*10^6*1*10^-3)/8=125000 bytes). As the
# latency and/or throughput increase so does the BDP. If the connection needs
# more buffer space the kernel will dynamically increase these network buffer
# values by net.inet.tcp.sendbuf_inc and net.inet.tcp.recvbuf_inc increments.
# Use "netstat -an" to watch Recv-Q and Send-Q as the kernel increases the
# network buffer up to net.inet.tcp.recvbuf_max and net.inet.tcp.sendbuf_max .
# https://en.wikipedia.org/wiki/Bandwidth-delay_product
#
net.inet.tcp.recvbuf_inc=65536    # (default 16384)
net.inet.tcp.recvbuf_max=4194304  # (default 2097152)
net.inet.tcp.recvspace=65536      # (default 65536)
net.inet.tcp.sendbuf_inc=65536    # (default 8192)
net.inet.tcp.sendbuf_max=4194304  # (default 2097152)
net.inet.tcp.sendspace=65536      # (default 32768)

# maximum segment size (MSS) specifies the largest payload of data in a single
# IPv4 TCP segment. RFC 6691 states the maximum segment size should equal the
# effective MTU minus the fixed IP and TCP headers, but before subtracting IP
# options like TCP timestamps. Path MTU Discovery (PMTUD) is not supported by
# all internet paths and can lead to increased connection setup latency so the
# MSS can be defined manually.
#
# Option 1 - Maximum Payload - To construct the maximum MSS, start with an
# ethernet frame size of 1514 bytes and subtract 14 bytes for the ethernet
# header for an interface MTU of 1500 bytes. Then subtract 20 bytes for the IP
# header and 20 bytes for the TCP header to equal an Maximum Segment Size (MSS)
# of tcp.mssdflt=1460 bytes. With net.inet.tcp.rfc1323 enabled the packet
# payload is reduced by a further 12 bytes and the MSS is reduced from
# tcp.mssdflt=1460 bytes to a packet payload of 1448 bytes total. An MSS of
# 1448 bytes has a 95.64% packet efficiency (1448/1514=0.9564).
#
# Option 2 - No Frags - Google states the HTTP/3 QUIC (Quick UDP Internet
# Connection) IPv4 datagram should be no larger than 1280 octets to attempt to
# avoid any packet fragmentation over any Internet path. To follow Google's
# no-fragment UDP policy for TCP packets set FreeBSD's MSS to 1240 bytes. To
# construct Google's no-fragment datagram start with an ethernet frame size of
# 1294 bytes and subtract 14 bytes for the ethernet header to equal Google's
# recommended PMTU size of 1280 bytes. Then subtract 20 bytes for the IP header
# and 20 bytes for the TCP header to equal tcp.mssdflt=1240 bytes. Then, before
# the packet is sent, FreeBSD will set the TCP timestamp (rfc1323) on the
# packet reducing the true packet payload (MSS) another 12 bytes from
# tcp.mssdflt=1240 bytes to 1228 bytes which has an 94.89% packet efficiency
# (1228/1294=0.9489). https://tools.ietf.org/html/draft-ietf-quic-transport-20
#
# Broken packets: IP fragmentation is flawed
# https://blog.cloudflare.com/ip-fragmentation-is-broken/
#
# FYI: PF with an outgoing scrub rule will re-package the packet using an MTU
# of 1460 by default, thus overriding the mssdflt setting wasting CPU time and
# adding latency.
#
net.inet.tcp.mssdflt=1460   # Option 1 (default 536)
#net.inet.tcp.mssdflt=1240  # Option 2 (default 536)

# minimum, maximum segment size (mMSS) specifies the smallest payload of data
# in a single IPv4 TCP segment our system will agree to send when negotiating
# with the client. RFC 6691 states that a minimum MTU size of 576 bytes must be
# supported and the MSS option should equal the effective MTU minus the fixed
# IP and TCP headers, but without subtracting IP or TCP options. To construct
# the minimum MSS start with a frame size of 590 bytes and subtract 14 bytes
# for the ethernet header to equal the RFC 6691 recommended MTU size of 576
# bytes. Continue by subtracting 20 bytes for the IP header and 20 bytes for
# the TCP header to equal tcp.minmss=536 bytes. Then, before the packet is
# sent, FreeBSD will set the TCP timestamp (rfc1323) on the packet reducing the
# true packet payload (MSS) another 12 bytes from tcp.minmss=536 bytes to 524
# bytes which is 90.9% packet efficiency (524/576=0.909). The default mMSS is
# only 84% efficient (216/256=0.84).
#
net.inet.tcp.minmss=536  # (default 216)

# TCP Slow start gradually increases the data send rate until the TCP
# congestion algorithm (CDG, H-TCP) calculates the networks maximum carrying
# capacity without dropping packets. TCP Congestion Control with Appropriate
# Byte Counting (ABC) allows our server to increase the maximum congestion
# window exponentially by the amount of data ACKed, but limits the maximum
# increment per ACK to (abc_l_var * maxseg) bytes. An abc_l_var of 44 times a
# maxseg of 1460 bytes would allow slow start to increase the congestion window
# by more than 64 kilobytes per step; 65535 bytes is the TCP receive buffer
# size of most hosts without TCP window scaling.
#
net.inet.tcp.abc_l_var=44   # (default 2) if net.inet.tcp.mssdflt = 1460
#net.inet.tcp.abc_l_var=52  # (default 2) if net.inet.tcp.mssdflt = 1240

# Initial Congestion Window (initcwnd) limits the amount of segments TCP can
# send onto the network before receiving an ACK from the other machine.
# Increasing the TCP Initial Congestion Window will reduce data transfer
# latency during the slow start phase of a TCP connection. The initial
# congestion window should be increased to speed up short, burst connections in
# order to send the most data in the shortest time frame without overloading
# any network buffers. Google's study reported sixteen(16) segments as showing
# the lowest latency initial congestion window. Also test 44 segments which is
# 65535 bytes, the TCP receive buffer size of most hosts without TCP window
# scaling.
# https://developers.google.com/speed/pagespeed/service/tcp_initcwnd_paper.pdf
#
net.inet.tcp.initcwnd_segments=44            # (default 10 for FreeBSD 11.2) if net.inet.tcp.mssdflt = 1460
#net.inet.tcp.initcwnd_segments=52           # (default 10 for FreeBSD 11.2) if net.inet.tcp.mssdflt = 1240

# RFC 6675 increases the accuracy of TCP Fast Recovery when combined with
# Selective Acknowledgement (net.inet.tcp.sack.enable=1). TCP loss recovery is
# enhanced by computing "pipe", a sender side estimation of the number of bytes
# still outstanding on the network. Fast Recovery is augmented by sending data
# on each ACK as necessary to prevent "pipe" from falling below the slow-start
# threshold (ssthresh). The TCP window size and SACK-based decisions are still
# determined by the congestion control algorithm; CDG, CUBIC or H-TCP if
# enabled, newreno by default.
#
net.inet.tcp.rfc6675_pipe=1  # (default 0)

# Reduce the amount of SYN/ACKs the server will re-transmit to an ip address
# whom did not respond to the first SYN/ACK. On a client's initial connection
# our server will always send a SYN/ACK in response to the client's initial
# SYN. Limiting retransmitted SYN/ACKs reduces local syn cache size and a "SYN
# flood" DoS attack's collateral damage by not sending SYN/ACKs back to spoofed
# ips, multiple times. If we do continue to send SYN/ACKs to spoofed IPs they
# may send RST's back to us and an "amplification" attack would begin against
# our host. If you do not wish to send retransmits at all then set to zero(0)
# especially if you are under a SYN attack. If our first SYN/ACK gets dropped
# the client will re-send another SYN if they still want to connect. Also set
# "net.inet.tcp.msl" to two(2) times the average round trip time of a client,
# but no lower than 2000ms (2s). Test with "netstat -s -p tcp" and look under
# syncache entries. http://www.ouah.org/spank.txt
# https://people.freebsd.org/~jlemon/papers/syncache.pdf
#
net.inet.tcp.syncache.rexmtlimit=0  # (default 3)

# IP fragments require CPU processing time and system memory to reassemble. Due
# to multiple attacks vectors ip fragmentation can contribute to and that
# fragmentation can be used to evade packet inspection and auditing, we will
# not accept IPv4 or IPv6 fragments. Comment out these directives when
# supporting traffic which generates fragments by design; like NFS and certain
# preternatural functions of the Sony PS4 gaming console.
# https://en.wikipedia.org/wiki/IP_fragmentation_attack
# https://www.freebsd.org/security/advisories/FreeBSD-SA-18:10.ip.asc
#
net.inet.ip.maxfragpackets=0     # (default 63474)
net.inet.ip.maxfragsperpacket=0  # (default 16)

# Syncookies have advantages and disadvantages. Syncookies are useful if you
# are being DoS attacked as this method helps filter the proper clients from
# the attack machines. But, since the TCP options from the initial SYN are not
# saved in syncookies, the tcp options are not applied to the connection,
# precluding use of features like window scale, timestamps, or exact MSS
# sizing. As the returning ACK establishes the connection, it may be possible
# for an attacker to ACK flood a machine in an attempt to create a connection.
# Another benefit to overflowing to the point of getting a valid SYN cookie is
# the attacker can include data payload. Now that the attacker can send data to
# a FreeBSD network daemon, even using a spoofed source IP address, they can
# have FreeBSD do processing on the data which is not something the attacker
# could do without having SYN cookies. Even though syncookies are helpful
# during a DoS, we are going to disable syncookies at this time.
#
net.inet.tcp.syncookies=0  # (default 1)

# RFC 6528 Initial Sequence Numbers (ISN) refer to the unique 32-bit sequence
# number assigned to each new Transmission Control Protocol (TCP) connection.
# The TCP protocol assigns an ISN to each new byte, beginning with 0 and
# incrementally adding a secret number every four seconds until the limit is
# exhausted. In continuous communication all available ISN options could be
# used up in a few hours. Normally a new secret number is only chosen after the
# ISN limit has been exceeded. In order to defend against Sequence Number
# Attacks the ISN secret key should not be used sufficiently often that it
# would be regarded as predictable, and thus insecure. Reseeding the ISN will
# break TIME_WAIT recycling for a few minutes. BUT, for the more paranoid,
# simply choose a random number of seconds in which a new ISN secret should be
# generated.  https://tools.ietf.org/html/rfc6528
#
net.inet.tcp.isn_reseed_interval=4500  # (default 0, disabled)

# TCP segmentation offload (TSO), also called large segment offload (LSO),
# should be disabled on NAT firewalls and routers. TSO/LSO works by queuing up
# large 64KB buffers and letting the network interface card (NIC) split them
# into separate packets. The problem is the NIC can build a packet that is the
# wrong size and would be dropped by a switch or the receiving machine, like
# for NFS fragmented traffic. If the packet is dropped the overall sending
# bandwidth is reduced significantly. You can also disable TSO in /etc/rc.conf
# using the "-tso" directive after the network card configuration; for example,
# ifconfig_igb0="inet 10.10.10.1 netmask 255.255.255.0 -tso". Verify TSO is off
# on the hardware by making sure TSO4 and TSO6 are not seen in the "options="
# section using ifconfig.
# http://www.peerwisdom.org/2013/04/03/large-send-offload-and-network-performance/
#
net.inet.tcp.tso=0  # (default 1)

# Intel i350-T2 igb(4): flow control manages the rate of data transmission
# between two nodes preventing a fast sender from overwhelming a slow receiver.
# Ethernet "PAUSE" frames will pause transmission of all traffic types on a
# physical link, not just the individual flow causing the problem. By disabling
# physical link flow control the link instead relies on native TCP or QUIC UDP
# internal congestion control which is peer based on IP address and more fair
# to each flow. The options are: (0=No Flow Control) (1=Receive Pause)
# (2=Transmit Pause) (3=Full Flow Control, Default). A value of zero(0)
# disables ethernet flow control on the Intel igb(4) interface.
# http://virtualthreads.blogspot.com/2006/02/beware-ethernet-flow-control.html
#
dev.igb.0.fc=0  # (default 3)

# Intel i350-T2 igb(4): the rx_budget sets the maximum number of receive
# packets to process in an interrupt. If the budget is reached, the
# remaining/pending packets will be processed later in a scheduled taskqueue.
# The default of zero(0) indicates a FreeBSD 12 default of sixteen(16) frames
# can be accepted at a time which is less than 24 kilobytes. If the server is
# not CPU limited and also receiving an agglomeration of QUIC HTTP/3 UDP
# packets, we advise increasing the budget to a maximum of 65535 packets. "man
# iflib" for more information.
#
dev.igb.0.iflib.rx_budget=65535  # (default 0, which is 16 frames)
dev.igb.1.iflib.rx_budget=65535  # (default 0, which is 16 frames)

# Fortuna pseudorandom number generator (PRNG) maximum event size is also
# referred to as the minimum pool size. Fortuna has a main generator which
# supplies the OS with PRNG data. The Fortuna generator is seeded by 32
# separate 'Fortuna' accumulation pools which each have to be filled with at
# least 'minpoolsize' bytes before being able to seed the OS with random bits.
# On FreeBSD, the default 'minpoolsize' of 64 bytes is an estimate of the
# minimum amount of bytes a new pool should contain to provide at least 128
# bits of entropy. After a pool is used in a generator reseed, that pool is
# reset to an empty string and must reach 'minpoolsize' bytes again before
# being used as a seed. Increasing the 'minpoolsize' allows higher entropy into
# the accumulation pools before being assimilated by the generator.
#
# The Fortuna authors state 64 bytes is safe enough even if an attacker
# influences some random source data. To be a bit more paranoid, we increase
# the 'minpoolsize' to 128 bytes so each pool will provide an absolute minimum
# of 256 bits of entropy, but realistically closer to 1024 bits of entropy, for
# each of the 32 Fortuna accumulation pools. Values of 128 bytes and 256 bytes
# are reasonable when coupled with a dedicated hardware based PRNG like the
# fast source Intel Secure Key RNG (PURE_RDRAND). Do not make the pool value
# too large as this will delay the reseed even if very good random sources are
# available. https://www.schneier.com/academic/paperfiles/fortuna.pdf
#
# FYI: on FreeBSD 11, values over 64 can incur additional reboot time to
# populate the pools during the "Feeding entropy:" boot stage. For example, a
# pool size value of 256 can add an additional 90 seconds to boot the machine.
# FreeBSD 12 has been patched to not incur the boot delay issue with larger
# pool values.
#
kern.random.fortuna.minpoolsize=128  # (default 64)

# Entropy is the amount of order, disorder or chaos observed in a system which
# can be observed by FreeBSD and fed though Fortuna to the accumulation pools.
# Setting the harvest.mask to 67583 allows the OS to harvest entropy from any
# source including peripherals, network traffic, the universal memory allocator
# (UMA) and interrupts (SWI), but be warned, setting the harvest mask to 67583
# will limit network throughput to less than a gigabit even on modern hardware.
# When running a ten(10) gigabit network with more than four(4) real CPU cores
# and more than four(4) network card queues it is recommended to reduce the
# harvest mask to 65887 to ignore the UMA, FS_ATIME, INTERRUPT and NET_ETHER
# entropy sources in order to achieve peak packets per second (PPS). By
# default, Fortuna will use a CPU's 'Intel Secure Key RNG' if available in
# hardware (PURE_RDRAND). Use "sysctl kern.random.harvest" to check the
# symbolic entropy sources being polled; disabled items are listed in square
# brackets. A harvest mask of 65887 is only around four(4%) more efficient than
# the default mask of 66047 at the maximum packets per second of the interface.
#
#kern.random.harvest.mask=351   # (default 511, FreeBSD 11 and 12 without Intel Secure Key RNG)
kern.random.harvest.mask=65887  # (default 66047, FreeBSD 12 with Intel Secure Key RNG)

# Increase the localhost buffer space as well as the maximum incoming and
# outgoing raw IP datagram size to 16384 bytes (2^14 bytes) which is the same
# as the MTU for the localhost interface, "ifconfig lo0". The larger buffer
# space should allow services which listen on localhost, like web or database
# servers, to more efficiently move data to the network buffers.
net.inet.raw.maxdgram=16384       # (default 9216)
net.inet.raw.recvspace=16384      # (default 9216)
net.local.stream.sendspace=16384  # (default 8192)
net.local.stream.recvspace=16384  # (default 8192)

# The TCPT_REXMT timer is used to force retransmissions. TCP has the
# TCPT_REXMT timer set whenever segments have been sent for which ACKs are
# expected, but not yet received. If an ACK is received which advances
# tp->snd_una, then the retransmit timer is cleared (if there are no more
# outstanding segments) or reset to the base value (if there are more ACKs
# expected). Whenever the retransmit timer goes off, we retransmit one
# unacknowledged segment, and do a backoff on the retransmit timer.
# net.inet.tcp.persmax=60000 # (default 60000)
# net.inet.tcp.persmin=5000  # (default 5000)

# Drop TCP options from 3rd and later retransmitted SYN
# net.inet.tcp.rexmit_drop_options=0  # (default 0)

# Enable tcp_drain routine for extra help when low on mbufs
# net.inet.tcp.do_tcpdrain=1 # (default 1)

# Myricom mxge(4): the maximum number of slices the driver will attempt to
# enable if enough system resources are available at boot. A slice is comprised
# of a set of receive queues and an associated interrupt thread. Multiple
# slices should be used when the network traffic is being limited by the
# processing speed of a single CPU core. When using multiple slices, the NIC
# hashes traffic to different slices based on the value of
# hw.mxge.rss_hashtype. Using multiple slices requires that your motherboard
# and Myri10GE NIC both be capable of MSI-X. The maximum number of slices
# is limited to the number of real CPU cores divided by the number of mxge
# network ports.
#hw.mxge.max_slices="1"  # (default 1, which uses a single cpu core)

# Myricom mxge(4): when multiple slices are enabled, the hash type determines
# how incoming traffic is steered to each slice. A slice is comprised of a set
# of receive queues and an associated interrupt thread. Hashing is disabled
# when using a single slice (hw.mxge.max_slices=1). The options are: ="1"
# hashes on the source and destination IPv4 addresses. ="2" hashes on the
# source and destination IPv4 addresses and also TCP source and destination
# ports. ="4" is the default and hashes on the TCP or UDP source ports. A value
# to "4" will more evenly distribute the flows over the slices. A value of "1"
# will lock client source ips to a single slice.
#hw.mxge.rss_hash_type="4"  # (default 4)

# Myricom mxge(4): flow control manages the rate of data transmission between
# two nodes preventing a fast sender from overwhelming a slow receiver.
# Ethernet "PAUSE" frames pause transmission of all traffic on a physical link,
# not just the individual flow causing the problem. By disabling physical link
# flow control the link instead relies on TCP's internal flow control which is
# peer based on IP address and more fair to each flow. The mxge options are:
# (0=No Flow Control) (1=Full Flow Control, Default). A value of zero(0)
# disables ethernet flow control on the Myricom mxge(4) interface.
# http://virtualthreads.blogspot.com/2006/02/beware-ethernet-flow-control.html
#hw.mxge.flow_control_enabled=0  # (default 1, enabled)

# The number of frames the NIC's receive (rx) queue will accept before
# triggering a kernel interrupt. If the NIC's queue is full and the kernel can
# not process the packets fast enough then the packets are dropped. Use "sysctl
# net.inet.ip.intr_queue_drops" and "netstat -Q" and increase the queue_maxlen
# if queue_drops is greater than zero(0). The real problem is the CPU or NIC is
# not fast enough to handle the traffic, but if you are already at the limit of
# your network then increasing these values will help.
#net.inet.ip.intr_queue_maxlen=2048  # (default 256)
net.route.netisr_maxqlen=2048       # (default 256)

# Intel igb(4): FreeBSD limits the number of received packets a network
# card can process to 100 packets per interrupt cycle. This limit is in place
# because of inefficiencies in IRQ sharing when the network card is using the
# same IRQ as another device. When the Intel network card is assigned a unique
# IRQ (dmesg) and MSI-X is enabled through the driver (hw.igb.enable_msix=1)
# then interrupt scheduling is significantly more efficient and the NIC can be
# allowed to process packets as fast as they are received. A value of "-1"
# means unlimited packet processing. There is no need to set these options if
# hw.igb.rx_process_limit is already defined.
#dev.igb.0.rx_processing_limit=-1  # (default 100)
#dev.igb.1.rx_processing_limit=-1  # (default 100)

# Intel igb(4): Energy-Efficient Ethernet (EEE) is intended to reduce system
# power consumption up to 80% by setting the interface to a low power mode
# during periods of network inactivity. When the NIC is in low power mode this
# allows the CPU longer periods of time to also go into a sleep state thus
# lowering overall power usage. The problem is EEE can cause periodic packet
# loss and latency spikes when the interface transitions from low power mode.
# Packet loss from EEE will not show up in the missed_packets or dropped
# counter because the packet was not dropped, but lost by the network card
# during the transition phase. The Intel i350-T2 only requires 4.4 watts with
# both network ports active so we recommend disabling EEE especially on a
# server unless power usage is of higher priority. Verify DMA Coalesce is
# disabled (dev.igb.0.dmac=0) which is the default. WARNING: enabling EEE will
# significantly delay DHCP leases and the network interface will flip a few
# times on boot. https://en.wikipedia.org/wiki/Energy-Efficient_Ethernet
#dev.igb.0.eee_disabled=1  # (default 0, enabled)
#dev.igb.1.eee_disabled=1  # (default 0, enabled)

# Modern CPUs can handle many more interrupts than the default (1000)
# The 9000 value was found in /usr/src/sys/dev/ixgbe/README
hw.intr_storm_threshold=9000

#Power save: Disable power for device with no driver loaded
hw.pci.do_power_nodriver=3
# ICMP reply from incoming interface for non-local packets
net.inet.icmp.reply_from_interface=1

# General Security and DoS mitigation
net.bpf.optimize_writers=1           # bpf are write-only unless program explicitly specifies the read filter (default 0)
#net.bpf.zerocopy_enable=0            # zero-copy BPF buffers, breaks dhcpd ! (default 0)
net.inet.ip.check_interface=1         # verify packet arrives on correct interface (default 0)
#net.inet.ip.portrange.randomized=1   # randomize outgoing upper ports (default 1)
net.inet.ip.process_options=0         # ignore IP options in the incoming packets (default 1)
net.inet.ip.random_id=1               # assign a random IP_ID to each packet leaving the system (default 0)
net.inet.ip.redirect=0                # do not send IP redirects (default 1)
#net.inet.ip.accept_sourceroute=0     # drop source routed packets since they can not be trusted (default 0)
#net.inet.ip.sourceroute=0            # if source routed packets are accepted the route data is ignored (default 0)
#net.inet.ip.stealth=1                # do not reduce the TTL by one(1) when a packets goes through the firewall (default 0)
#net.inet.icmp.bmcastecho=0           # do not respond to ICMP packets sent to IP broadcast addresses (default 0)
#net.inet.icmp.maskfake=0             # do not fake reply to ICMP Address Mask Request packets (default 0)
#net.inet.icmp.maskrepl=0             # replies are not sent for ICMP address mask requests (default 0)
#net.inet.icmp.log_redirect=0         # do not log redirected ICMP packet attempts (default 0)
net.inet.icmp.drop_redirect=1         # no redirected ICMP packets (default 0)
#net.inet.icmp.icmplim=200            # number of ICMP/TCP RST packets/sec, increase for bittorrent or many clients. (default 200)
#net.inet.icmp.icmplim_output=1       # show "Limiting open port RST response" messages (default 1)
#net.inet.tcp.abc_l_var=2             # increment the slow-start Congestion Window (cwnd) after two(2) segments (default 2)
net.inet.tcp.always_keepalive=0       # disable tcp keep alive detection for dead peers, keepalive can be spoofed (default 1)
net.inet.tcp.drop_synfin=1            # SYN/FIN packets get dropped on initial connection (default 0)
net.inet.tcp.ecn.enable=1             # explicit congestion notification (ecn) warning: some ISP routers may abuse ECN (default 0)
net.inet.tcp.fast_finwait2_recycle=1  # recycle FIN/WAIT states quickly (helps against DoS, but may cause false RST) (default 0)
net.inet.tcp.icmp_may_rst=0           # icmp may not send RST to avoid spoofed icmp/udp floods (default 1)
#net.inet.tcp.maxtcptw=50000          # max number of tcp time_wait states for closing connections (default ~27767)
net.inet.tcp.msl=5000                 # Maximum Segment Lifetime is the time a TCP segment can exist on the network and is
                                      # used to determine the TIME_WAIT interval, 2*MSL (default 30000 which is 60 seconds)
net.inet.tcp.path_mtu_discovery=0     # disable MTU discovery since many hosts drop ICMP type 3 packets (default 1)
#net.inet.tcp.rfc3042=1               # on packet loss trigger the fast retransmit algorithm instead of tcp timeout (default 1)
net.inet.udp.blackhole=1              # drop udp packets destined for closed sockets (default 0)
net.inet.tcp.blackhole=2              # drop tcp packets destined for closed ports (default 0)
cat /etc/rc.conf | grep -e ifconfig_ix[0-1]

ifconfig_ix0="up -rxcsum -txcsum -tso4 -tso6 -lro -vlanhwtso -vlanhwtag -vlanhwfilter description -=INTERFACE-TO-CORE-SWITCH=-"
ifconfig_ix1="up -rxcsum -txcsum -tso4 -tso6 -lro -vlanhwtso -vlanhwtag -vlanhwfilter description -=CORE-SWITCH=-"
ipfw show

00100       7914802        880058842 allow ip from any to any via lo0
00200             0                0 deny ip from any to 127.0.0.0/8
00300             0                0 deny ip from 127.0.0.0/8 to any
01100             0                0 deny ip from not me to 10.0.50.0/24 out xmit vlan300
01101             0                0 deny ip from 10.0.50.0/24 to not me in recv vlan300
01300        796298         41238560 deny tcp from not me to any 25 out xmit vlan998
02010             0                0 allow udp from 0.0.0.0 20561 to 255.255.255.255 in recv table(USERS_VLAN)
02040    3763624719     300665541296 allow ip from any to { table(ALLOW_IP) or me } in recv table(USERS_VLAN)
02041    5234871982     638932581223 allow ip from { table(ALLOW_IP) or me } to any out xmit table(USERS_VLAN)
04000    1999003789     838624501100 skipto 30100 tcp from table(USERS_NAT) to table(PAYSYSTEMS) 80,443 in recv table(USERS_VLAN)
04001    2208731646    1553147142002 skipto 30200 tcp from table(PAYSYSTEMS) 80,443 to table(USERS_NAT) out xmit table(USERS_VLAN)
05000      31815632       4431518180 fwd 10.31.31.101 tcp from 172.31.0.0/16 to not table(ALLOW_IP) 80 in recv table(USERS_VLAN)
05010      25988580      19352798938 allow tcp from not me 80 to 172.31.0.0/16 out xmit table(USERS_VLAN)
06000     711416818      51200880709 deny ip from { table(0) or not table(50) } to not me in recv table(USERS_VLAN)
06010     125985439       6429835698 deny ip from not me to { table(0) or not table(51) } out xmit table(USERS_VLAN)
30100  633012123638  168286861539984 nat tablearg ip from table(USERS_NAT) to not table(ALLOW_IP) out xmit vlan998
30200 1530752004794 2013678396459354 nat tablearg ip from not table(ALLOW_IP) to table(REALIP_NAT) in recv vlan998
65535 3579337678544 3536650872365449 allow ip from any to any

 

For the server with dummynet, the files that differ:

 

less /boot/loader.conf

dev.ix.0.iflib.core_offset=1
dev.ix.1.iflib.core_offset=1

dev.ix.0.iflib.override_nrxqs=7
dev.ix.0.iflib.override_ntxqs=7

dev.ix.1.iflib.override_nrxqs=7
dev.ix.1.iflib.override_ntxqs=7

# BSDRP
hw.em.rx_process_limit="-1"
hw.igb.rx_process_limit="-1"
hw.ix.rx_process_limit="-1"

# Allow unsupported SFP
hw.ix.unsupported_sfp="1"
hw.ix.allow_unsupported_sfp="1"

# https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards#TSO.2FLRO
hw.ix.flow_control=0

# H-TCP Congestion Control for a more aggressive increase in speed on higher
# latency, high bandwidth networks with some packet loss.
#cc_htcp_load="YES"

net.link.ifqmaxlen="16384"  # (default 50)

# qlimit for igmp, arp, ether and ip6 queues only (netstat -Q) (default 256)
net.isr.defaultqlimit="4096" # (default 256)

# increase the number of network mbufs the system is willing to allocate.  Each
# cluster represents approximately 2K of memory, so a value of 524288
# represents 1GB of kernel memory reserved for network buffers. (default
# 492680)
kern.ipc.nmbclusters="5242880"
kern.ipc.nmbjumbop="2621440"


# Size of the syncache hash table, must be a power of 2 (default 512)
net.inet.tcp.syncache.hashsize="1024"

# Limit the number of entries permitted in each bucket of the hash table. (default 30)
net.inet.tcp.syncache.bucketlimit="100"

# limit per-workstream queues (use "netstat -Q"; if Qdrop is greater than 0,
# increase this directive) (default 10240)
net.isr.maxqlimit="1000000"
less /usr/local/etc/rc.d/cpuset-dummynet
#!/bin/sh
# PROVIDE: cpuset-dummynet
# REQUIRE: FILESYSTEMS
# BEFORE:  netif
# KEYWORD: nojail
# Pin the in-kernel dummynet thread to CPU core 0 (procstat -t 0 lists
# kernel threads; awk extracts the dummynet thread id for cpuset -t):
/usr/bin/cpuset -l 0 -t $(procstat -t 0 | /usr/bin/awk '/dummynet/ {print $2}')
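
The cpuset-ix-iflib script people asked for earlier never gets posted in the thread. A sketch in the same spirit as cpuset-dummynet above might look like the following; the irq-name pattern and the queue-to-core layout are assumptions (check what "vmstat -ai" actually reports on your box), not the poster's script:

#!/bin/sh
# PROVIDE: cpuset-ix-iflib
# REQUIRE: FILESYSTEMS
# BEFORE:  netif
# KEYWORD: nojail
# Pin each ix(4) receive-queue interrupt to its own core, starting from
# core 1 so that core 0 stays free for the dummynet thread pinned above.
core=1
for irq in $(vmstat -ai | awk -F'[: ]' '/ix[0-9]+:rxq/ {sub("irq", "", $1); print $1}'); do
    /usr/bin/cpuset -l ${core} -x ${irq}
    core=$((core + 1))
done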
ipfw show

00100      7301952       799721278 allow ip from any to any via lo0
00200            0               0 deny ip from any to 127.0.0.0/8
00300            0               0 deny ip from 127.0.0.0/8 to any
01300       619087        36412252 deny tcp from not me to any 25 out xmit vlan998
01310         3249          271174 deny ip from table(DENY_IP) to any in recv vlan998
01311     22474455      1348485560 deny ip from any to table(DENY_IP) out xmit vlan998
02010            0               0 allow udp from 0.0.0.0 20561 to 255.255.255.255 in recv table(USERS_VLAN)
02040   1604609344    143551782453 allow ip from any to { table(ALLOW_IP) or me } in recv table(USERS_VLAN)
02041   1816999815    197994921810 allow ip from { table(ALLOW_IP) or me } to any out xmit table(USERS_VLAN)
04000    904889973    372143814795 skipto 30100 tcp from table(USERS_NAT) to table(PAYSYSTEMS) 80,443 in recv table(USERS_VLAN)
04001    983011289    714341053758 skipto 30200 tcp from table(PAYSYSTEMS) 80,443 to table(USERS_NAT) out xmit table(USERS_VLAN)
05000        74345         9366675 fwd 10.31.31.101,26513 tcp from 172.31.0.0/16 to not table(ALLOW_IP) 80 in recv table(USERS_VLAN)
05010        71843        42225547 allow tcp from not me 80 to 172.31.0.0/16 out xmit table(USERS_VLAN)
06000    334080578     27129339965 deny ip from { table(0) or not table(50) } to not me in recv table(USERS_VLAN)
06010    221351653     15300095622 deny ip from not me to { table(0) or not table(51) } out xmit table(USERS_VLAN)
30000 257411270192  51167909907089 pipe tablearg ip from table(50) to any in recv vlan*
30004 522369117361 672065962540723 pipe tablearg ip from any to table(51) out xmit vlan*
30100 245366462925  39179861277335 nat tablearg ip from table(USERS_NAT) to not table(ALLOW_IP) out xmit vlan998
30200 513354241457 665870772340077 nat tablearg ip from not table(ALLOW_IP) to table(REALIP_NAT) in recv vlan998
65535  83965478411  60289214411412 allow ip from any to any

 

  • 1 month later...

The problem: about once a week a random subscriber stops responding to ping from the 10G server.

The subscriber says the internet comes and goes as it pleases, or dies completely, depending on their luck.

At the moment the subscriber's internet is down,

arping works.

I've tried resetting the router, resetting in billing, replacing the ONU, arp -d <subscriber IP>; nothing helps.

The shaper is assigned correctly.

What else can I check?

 

When I move the subscriber to another NAS, everything is fine there.

 

NAS rscript on FreeBSD 12.1,

10G NIC, tuning from Pautiina, VLANs in a bridge.

~1000 subscribers chewing through 1.6G of traffic; at peak it pushes another gig, I haven't tried more.

 

4 hours ago, mgo said:

The problem: about once a week a random subscriber stops responding to ping from the 10G server.

The subscriber says the internet comes and goes as it pleases, or dies completely, depending on their luck.

At the moment the subscriber's internet is down,

arping works.

I've tried resetting the router, resetting in billing, replacing the ONU, arp -d <subscriber IP>; nothing helps.

The shaper is assigned correctly.

What else can I check?

When I move the subscriber to another NAS, everything is fine there.

NAS rscript on FreeBSD 12.1,

10G NIC, tuning from Pautiina, VLANs in a bridge.

~1000 subscribers chewing through 1.6G of traffic; at peak it pushes another gig, I haven't tried more.

 

See, I don't shove anything into bridges. Try removing the blocking firewall rules first.

19 hours ago, Pautiina said:

See, I don't shove anything into bridges. Try removing the blocking firewall rules first.

I don't think the bridges are to blame here.

  • 1 month later...
On 18.03.2021 at 20:15, mgo said:

The problem: about once a week a random subscriber stops responding to ping from the 10G server.

The subscriber says the internet comes and goes as it pleases, or dies completely, depending on their luck.

At the moment the subscriber's internet is down,

arping works.

I've tried resetting the router, resetting in billing, replacing the ONU, arp -d <subscriber IP>; nothing helps.

The shaper is assigned correctly.

What else can I check?

When I move the subscriber to another NAS, everything is fine there.

NAS rscript on FreeBSD 12.1,

10G NIC, tuning from Pautiina, VLANs in a bridge.

~1000 subscribers chewing through 1.6G of traffic; at peak it pushes another gig, I haven't tried more.

 

it has happened that rscript thrashed in a loop, enabling and disabling subscribers; look at table 47 at the moment the subscriber's connection drops
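
A quick way to catch that in the act, assuming table 47 is where rscript keeps blocked subscribers as the post implies (the address is a placeholder):

# is the affected subscriber sitting in the block table right now?
ipfw table 47 list | grep 172.31.5.23
# watch the table size for flapping entries while the problem reproduces
while sleep 5; do date; ipfw table 47 list | wc -l; done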

  • 6 months later...

Let me bump the thread. Explain what, in this:

On 22.01.2021 at 21:13, Pautiina said:

ifconfig_ix0="up -rxcsum -txcsum -tso4 -tso6 -lro -vlanhwtso -vlanhwtag -vlanhwfilter description -=INTERFACE-TO-CORE-SWITCH=-"

ifconfig_ix1="up -rxcsum -txcsum -tso4 -tso6 -lro -vlanhwtso -vlanhwtag -vlanhwfilter description -=CORE-SWITCH=-"

"делают" записи:

-=INTERFACE-TO-CORE-SWITCH=-

-=CORE-SWITCH=-

 

 
