Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next

Pablo Neira Ayuso says:

====================
Netfilter/IPVS updates for net-next

The following patchset contains Netfilter updates for your net-next
tree: a large batch of code cleanups, a simplified conntrack extension
codebase, removal of the fake conntrack object, and faster netns
cleanup via selective synchronize_net() calls. More specifically,
they are:

1) Check for ct->status bit instead of using nfct_nat() from IPVS and
   Netfilter codebase, patch from Florian Westphal.

2) Use kcalloc() wherever possible in the IPVS code, from Varsha Rao.

3) Simplify FTP IPVS helper module registration path, from Arushi Singhal.

4) Introduce nft_is_base_chain() helper function.

5) Enforce expectation limit from userspace conntrack helper,
   from Gao Feng.

6) Add nf_ct_remove_expect() helper function, from Gao Feng.

7) Make the NAT mangle helper functions return boolean, from Gao Feng.

8) ctnetlink_alloc_expect() should only work for conntrack with
   helpers, from Gao Feng.

9) Add nfnl_msg_type() helper function to nfnetlink to build the
   netlink message type.

10) Get rid of unnecessary cast on void, from simran singhal.

11) Use seq_puts()/seq_putc() instead of seq_printf() where possible,
    also from simran singhal.

12) Use list_prev_entry() in nf_tables, from simran singhal.

13) Remove unnecessary & on pointer function in the Netfilter and IPVS
    code.

14) Remove an obsolete comment about per-CPU rule sets in ip6_tables,
    which is no longer true. From Arushi Singhal.

15) Remove duplicated nf_conntrack_l4proto_udplite4, from Gao Feng.

16) Remove unnecessary nested rcu_read_lock() in
    __nf_nat_decode_session(). Code running from hooks is already
    guaranteed to run under the RCU read side.

17) Remove deadcode in nf_tables_getobj(), from Aaron Conole.

18) Remove double assignment in nf_ct_l4proto_pernet_unregister_one(),
    also from Aaron.

19) Get rid of unused __ip_set_get_netlink(), from Aaron Conole.

20) Don't propagate NF_DROP error to userspace via ctnetlink in
    __nf_nat_alloc_null_binding() function, from Gao Feng.

21) Revisit nf_ct_deliver_cached_events() to remove unnecessary checks,
    from Gao Feng.

22) Kill the fake untracked conntrack objects; use ctinfo instead to
    annotate that a conntrack object is untracked, from Florian
    Westphal.

23) Remove nf_ct_is_untracked(), now obsolete since we have no
    conntrack template anymore, from Florian.

24) Add event mask support to nft_ct, also from Florian.

25) Move nf_conn_help structure to
    include/net/netfilter/nf_conntrack_helper.h.

26) Add a fixed 32-byte scratchpad area for conntrack helpers, so we
    no longer deal with variable-size conntrack extensions. Make sure
    userspace conntrack helpers don't go over that size. Remove the
    variable-size ct extension infrastructure now that this code has
    no more clients. From Florian Westphal.

27) Restore the offset and length fields of nf_ct_ext to 8 bits, now
    that wraparound is no longer possible, also from Florian.

28) Allow getting rid of unassured flows under stress in conntrack;
    this applies to the DCCP, SCTP and TCP protocols, from Florian.

29) Shrink size of nf_conntrack_ecache structure, from Florian.

30) Use TCP_MAX_WSCALE instead of hardcoded 14 in TCP tracker,
    from Gao Feng.

31) Register SYNPROXY hooks on demand, from Florian Westphal.

32) Use pernet hook whenever possible, instead of global hook
    registration, from Florian Westphal.

33) Pass hook structure to ebt_register_table() to consolidate some
    infrastructure code, from Florian Westphal.

34) Use consume_skb() and return NF_STOLEN, instead of NF_DROP in the
    SYNPROXY code, to make sure device stats are not fooled, patch
    from Gao Feng.

35) Remove NF_CT_EXT_F_PREALLOC; this kills quite some code that we
    no longer need once we select a fixed size instead of calculating
    it at runtime. From Florian.

36) Constify nf_ct_extend_register() and nf_ct_extend_unregister(),
    from Florian.

37) Simplify nf_ct_ext_add(), this kills nf_ct_ext_create(), from
    Florian.

38) Attach NAT extension on-demand from masquerade and pptp helper
    path, from Florian.

39) Get rid of useless ip_vs_set_state_timeout(), from Aaron Conole.

40) Speed up netns by selective calls of synchronize_net(), from
    Florian Westphal.

41) Silence a gcc stack size warning on 32-bit arches in the SNMP
    helper, from Florian.

42) Unconditionally call nf_ct_ext_destroy(), even if we have no
    extensions, to deal with the NF_NAT_MANIP_SRC case. Patch from
    Liping Zhang.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2017-05-01 10:46:50 -04:00
commit a01aa920b8
113 changed files with 866 additions and 849 deletions

View File

@ -41,6 +41,11 @@ int nfnetlink_set_err(struct net *net, u32 portid, u32 group, int error);
int nfnetlink_unicast(struct sk_buff *skb, struct net *net, u32 portid,
int flags);
static inline u16 nfnl_msg_type(u8 subsys, u8 msg_type)
{
return subsys << 8 | msg_type;
}
void nfnl_lock(__u8 subsys_id);
void nfnl_unlock(__u8 subsys_id);
#ifdef CONFIG_PROVE_LOCKING
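
The new helper centralizes the subsystem/message encoding that callers
previously open-coded as "subsys << 8 | msg_type" (the ipset hunk
further down converts one such caller). A minimal sketch of the call
pattern, using the ctnetlink constants purely as an example pair:

	#include <linux/netfilter/nfnetlink.h>
	#include <linux/netfilter/nfnetlink_conntrack.h>

	/* Build the u16 nlmsg_type: subsystem id in the high byte,
	 * per-subsystem message in the low byte. */
	static u16 example_ct_new_type(void)
	{
		return nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, IPCTNL_MSG_CT_NEW);
	}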

View File

@ -109,8 +109,10 @@ struct ebt_table {
#define EBT_ALIGN(s) (((s) + (__alignof__(struct _xt_align)-1)) & \
~(__alignof__(struct _xt_align)-1))
 extern struct ebt_table *ebt_register_table(struct net *net,
-					    const struct ebt_table *table);
-extern void ebt_unregister_table(struct net *net, struct ebt_table *table);
+					    const struct ebt_table *table,
+					    const struct nf_hook_ops *);
+extern void ebt_unregister_table(struct net *net, struct ebt_table *table,
+				 const struct nf_hook_ops *);
extern unsigned int ebt_do_table(struct sk_buff *skb,
const struct nf_hook_state *state,
struct ebt_table *table);

View File

@ -1349,8 +1349,6 @@ int ip_vs_protocol_init(void);
void ip_vs_protocol_cleanup(void);
void ip_vs_protocol_timeout_change(struct netns_ipvs *ipvs, int flags);
int *ip_vs_create_timeout_table(int *table, int size);
int ip_vs_set_state_timeout(int *table, int num, const char *const *names,
const char *name, int to);
void ip_vs_tcpudp_debug_packet(int af, struct ip_vs_protocol *pp,
const struct sk_buff *skb, int offset,
const char *msg);
@ -1555,13 +1553,9 @@ static inline void ip_vs_notrack(struct sk_buff *skb)
enum ip_conntrack_info ctinfo;
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
-	if (!ct || !nf_ct_is_untracked(ct)) {
-		struct nf_conn *untracked;
-
-		if (ct)
-			nf_conntrack_put(&ct->ct_general);
-		untracked = nf_ct_untracked_get();
-		nf_conntrack_get(&untracked->ct_general);
-		nf_ct_set(skb, untracked, IP_CT_NEW);
+	if (ct) {
+		nf_conntrack_put(&ct->ct_general);
+		nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
 	}
#endif
}
@ -1620,7 +1614,7 @@ static inline bool ip_vs_conn_uses_conntrack(struct ip_vs_conn *cp,
if (!(cp->flags & IP_VS_CONN_F_NFCT))
return false;
ct = nf_ct_get(skb, &ctinfo);
if (ct && !nf_ct_is_untracked(ct))
if (ct)
return true;
#endif
return false;

View File

@ -14,7 +14,6 @@ extern struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp4;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udp4;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite4;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_icmp;
#ifdef CONFIG_NF_CT_PROTO_DCCP
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_dccp4;

View File

@ -5,7 +5,6 @@ extern struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp6;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udp6;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite6;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_icmpv6;
#ifdef CONFIG_NF_CT_PROTO_DCCP
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_dccp6;

View File

@ -50,25 +50,6 @@ union nf_conntrack_expect_proto {
#define NF_CT_ASSERT(x)
#endif
struct nf_conntrack_helper;
/* Must be kept in sync with the classes defined by helpers */
#define NF_CT_MAX_EXPECT_CLASSES 4
/* nf_conn feature for connections that have a helper */
struct nf_conn_help {
/* Helper. if any */
struct nf_conntrack_helper __rcu *helper;
struct hlist_head expectations;
/* Current number of expected connections */
u8 expecting[NF_CT_MAX_EXPECT_CLASSES];
/* private helper information. */
char data[];
};
#include <net/netfilter/ipv4/nf_conntrack_ipv4.h>
#include <net/netfilter/ipv6/nf_conntrack_ipv6.h>
@ -243,14 +224,6 @@ extern s32 (*nf_ct_nat_offset)(const struct nf_conn *ct,
enum ip_conntrack_dir dir,
u32 seq);
/* Fake conntrack entry for untracked connections */
DECLARE_PER_CPU_ALIGNED(struct nf_conn, nf_conntrack_untracked);
static inline struct nf_conn *nf_ct_untracked_get(void)
{
return raw_cpu_ptr(&nf_conntrack_untracked);
}
void nf_ct_untracked_status_or(unsigned long bits);
/* Iterate over all conntracks: if iter returns true, it's deleted. */
void nf_ct_iterate_cleanup(struct net *net,
int (*iter)(struct nf_conn *i, void *data),
@ -281,11 +254,6 @@ static inline int nf_ct_is_dying(const struct nf_conn *ct)
return test_bit(IPS_DYING_BIT, &ct->status);
}
static inline int nf_ct_is_untracked(const struct nf_conn *ct)
{
return test_bit(IPS_UNTRACKED_BIT, &ct->status);
}
/* Packet is received from loopback */
static inline bool nf_is_loopback_packet(const struct sk_buff *skb)
{

View File

@ -65,7 +65,7 @@ static inline int nf_conntrack_confirm(struct sk_buff *skb)
struct nf_conn *ct = (struct nf_conn *)skb_nfct(skb);
int ret = NF_ACCEPT;
if (ct && !nf_ct_is_untracked(ct)) {
if (ct) {
if (!nf_ct_is_confirmed(ct))
ret = __nf_conntrack_confirm(skb);
if (likely(ret == NF_ACCEPT))

View File

@ -20,11 +20,11 @@ enum nf_ct_ecache_state {
struct nf_conntrack_ecache {
unsigned long cache; /* bitops want long */
-	unsigned long missed;	/* missed events */
+	u16 missed;		/* missed events */
 	u16 ctmask;		/* bitmask of ct events to be delivered */
 	u16 expmask;		/* bitmask of expect events to be delivered */
+	enum nf_ct_ecache_state state:8;/* ecache state */
 	u32 portid;		/* netlink portid of destroyer */
-	enum nf_ct_ecache_state state;	/* ecache state */
};
static inline struct nf_conntrack_ecache *

View File

@ -73,6 +73,7 @@ struct nf_conntrack_expect_policy {
};
#define NF_CT_EXPECT_CLASS_DEFAULT 0
#define NF_CT_EXPECT_MAX_CNT 255
int nf_conntrack_expect_pernet_init(struct net *net);
void nf_conntrack_expect_pernet_fini(struct net *net);
@ -104,6 +105,7 @@ static inline void nf_ct_unlink_expect(struct nf_conntrack_expect *exp)
void nf_ct_remove_expectations(struct nf_conn *ct);
void nf_ct_unexpect_related(struct nf_conntrack_expect *exp);
bool nf_ct_remove_expect(struct nf_conntrack_expect *exp);
/* Allocate space for an expectation: this is mandatory before calling
nf_ct_expect_related. You will have to call put afterwards. */

View File

@ -43,8 +43,8 @@ enum nf_ct_ext_id {
/* Extensions: optional stuff which isn't permanently in struct. */
struct nf_ct_ext {
struct rcu_head rcu;
-	u16 offset[NF_CT_EXT_NUM];
-	u16 len;
+	u8 offset[NF_CT_EXT_NUM];
+	u8 len;
char data[0];
};
@ -69,12 +69,7 @@ static inline void *__nf_ct_ext_find(const struct nf_conn *ct, u8 id)
((id##_TYPE *)__nf_ct_ext_find((ext), (id)))
/* Destroy all relationships */
-void __nf_ct_ext_destroy(struct nf_conn *ct);
-
-static inline void nf_ct_ext_destroy(struct nf_conn *ct)
-{
-	if (ct->ext)
-		__nf_ct_ext_destroy(ct);
-}
+void nf_ct_ext_destroy(struct nf_conn *ct);
/* Free operation. If you want to free a object referred from private area,
* please implement __nf_ct_ext_free() and call it.
@ -86,15 +81,7 @@ static inline void nf_ct_ext_free(struct nf_conn *ct)
}
/* Add this type, returns pointer to data or NULL. */
-void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
-			     size_t var_alloc_len, gfp_t gfp);
-
-#define nf_ct_ext_add(ct, id, gfp) \
-	((id##_TYPE *)__nf_ct_ext_add_length((ct), (id), 0, (gfp)))
-#define nf_ct_ext_add_length(ct, id, len, gfp) \
-	((id##_TYPE *)__nf_ct_ext_add_length((ct), (id), (len), (gfp)))
-
-#define NF_CT_EXT_F_PREALLOC	0x0001
+void *nf_ct_ext_add(struct nf_conn *ct, enum nf_ct_ext_id id, gfp_t gfp);
struct nf_ct_ext_type {
/* Destroys relationships (can be NULL). */
@ -102,15 +89,11 @@ struct nf_ct_ext_type {
enum nf_ct_ext_id id;
unsigned int flags;
/* Length and min alignment. */
u8 len;
u8 align;
/* initial size of nf_ct_ext. */
u8 alloc_size;
};
int nf_ct_extend_register(struct nf_ct_ext_type *type);
void nf_ct_extend_unregister(struct nf_ct_ext_type *type);
int nf_ct_extend_register(const struct nf_ct_ext_type *type);
void nf_ct_extend_unregister(const struct nf_ct_ext_type *type);
#endif /* _NF_CONNTRACK_EXTEND_H */
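
With the variable-length path gone, adding an extension is a single
fixed-size call. A sketch of the new caller-side pattern, using the
accounting extension as an arbitrary example and assuming the usual
conntrack headers:

	#include <net/netfilter/nf_conntrack.h>
	#include <net/netfilter/nf_conntrack_acct.h>
	#include <net/netfilter/nf_conntrack_extend.h>

	/* Sketch: one fixed-size allocation per extension id; the old
	 * nf_ct_ext_add_length()/NF_CT_EXT_F_PREALLOC variants are gone. */
	static struct nf_conn_acct *example_acct_add(struct nf_conn *ct)
	{
		return nf_ct_ext_add(ct, NF_CT_EXT_ACCT, GFP_ATOMIC);
	}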

View File

@ -29,9 +29,6 @@ struct nf_conntrack_helper {
struct module *me; /* pointer to self */
const struct nf_conntrack_expect_policy *expect_policy;
/* length of internal data, ie. sizeof(struct nf_ct_*_master) */
size_t data_len;
/* Tuple of things we will help (compared against server response) */
struct nf_conntrack_tuple tuple;
@ -49,9 +46,33 @@ struct nf_conntrack_helper {
unsigned int expect_class_max;
unsigned int flags;
-	unsigned int queue_num;		/* For user-space helpers. */
+	/* For user-space helpers: */
+	unsigned int queue_num;
+
+	/* length of userspace private data stored in nf_conn_help->data */
+	u16 data_len;
};
/* Must be kept in sync with the classes defined by helpers */
#define NF_CT_MAX_EXPECT_CLASSES 4
/* nf_conn feature for connections that have a helper */
struct nf_conn_help {
/* Helper. if any */
struct nf_conntrack_helper __rcu *helper;
struct hlist_head expectations;
/* Current number of expected connections */
u8 expecting[NF_CT_MAX_EXPECT_CLASSES];
/* private helper information. */
char data[32] __aligned(8);
};
#define NF_CT_HELPER_BUILD_BUG_ON(structsize) \
BUILD_BUG_ON((structsize) > FIELD_SIZEOF(struct nf_conn_help, data))
struct nf_conntrack_helper *__nf_conntrack_helper_find(const char *name,
u16 l3num, u8 protonum);
@ -62,7 +83,7 @@ void nf_ct_helper_init(struct nf_conntrack_helper *helper,
u16 l3num, u16 protonum, const char *name,
u16 default_port, u16 spec_port, u32 id,
const struct nf_conntrack_expect_policy *exp_pol,
u32 expect_class_max, u32 data_len,
u32 expect_class_max,
int (*help)(struct sk_buff *skb, unsigned int protoff,
struct nf_conn *ct,
enum ip_conntrack_info ctinfo),
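
Since helper private data now lives in the fixed scratchpad, in-kernel
helpers are expected to assert at build time that their per-connection
struct fits; a sketch of the init-time check, with the FTP master
struct as one concrete example (the amanda hunk below passes 0 because
it keeps no private data):

	#include <linux/netfilter/nf_conntrack_ftp.h>
	#include <net/netfilter/nf_conntrack_helper.h>

	static int __init example_helper_init(void)
	{
		/* Fails the build if the private state outgrows the
		 * fixed 32-byte nf_conn_help->data area. */
		NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_ftp_master));
		return 0;
	}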

View File

@ -58,6 +58,9 @@ struct nf_conntrack_l4proto {
unsigned int dataoff,
u_int8_t pf, unsigned int hooknum);
/* called by gc worker if table is full */
bool (*can_early_drop)(const struct nf_conn *ct);
/* Print out the per-protocol part of the tuple. Return like seq_* */
void (*print_tuple)(struct seq_file *s,
const struct nf_conntrack_tuple *);

View File

@ -52,6 +52,8 @@ struct synproxy_stats {
struct synproxy_net {
struct nf_conn *tmpl;
struct synproxy_stats __percpu *stats;
unsigned int hook_ref4;
unsigned int hook_ref6;
};
extern unsigned int synproxy_net_id;

View File

@ -67,7 +67,7 @@ static inline bool nf_nat_oif_changed(unsigned int hooknum,
{
#if IS_ENABLED(CONFIG_NF_NAT_MASQUERADE_IPV4) || \
IS_ENABLED(CONFIG_NF_NAT_MASQUERADE_IPV6)
return nat->masq_index && hooknum == NF_INET_POST_ROUTING &&
return nat && nat->masq_index && hooknum == NF_INET_POST_ROUTING &&
CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL &&
nat->masq_index != out->ifindex;
#else

View File

@ -7,31 +7,31 @@
struct sk_buff;
/* These return true or false. */
int __nf_nat_mangle_tcp_packet(struct sk_buff *skb, struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
unsigned int protoff, unsigned int match_offset,
unsigned int match_len, const char *rep_buffer,
unsigned int rep_len, bool adjust);
bool __nf_nat_mangle_tcp_packet(struct sk_buff *skb, struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
unsigned int protoff, unsigned int match_offset,
unsigned int match_len, const char *rep_buffer,
unsigned int rep_len, bool adjust);
static inline int nf_nat_mangle_tcp_packet(struct sk_buff *skb,
struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
unsigned int protoff,
unsigned int match_offset,
unsigned int match_len,
const char *rep_buffer,
unsigned int rep_len)
static inline bool nf_nat_mangle_tcp_packet(struct sk_buff *skb,
struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
unsigned int protoff,
unsigned int match_offset,
unsigned int match_len,
const char *rep_buffer,
unsigned int rep_len)
{
return __nf_nat_mangle_tcp_packet(skb, ct, ctinfo, protoff,
match_offset, match_len,
rep_buffer, rep_len, true);
}
int nf_nat_mangle_udp_packet(struct sk_buff *skb, struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
unsigned int protoff, unsigned int match_offset,
unsigned int match_len, const char *rep_buffer,
unsigned int rep_len);
bool nf_nat_mangle_udp_packet(struct sk_buff *skb, struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
unsigned int protoff, unsigned int match_offset,
unsigned int match_len, const char *rep_buffer,
unsigned int rep_len);
/* Setup NAT on this expected conntrack so it follows master, but goes
* to port ct->master->saved_proto. */
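
With the mangle helpers returning bool, callers can branch on the
result directly instead of comparing against 0, as the pptp and IPVS
FTP conversions further down do. A minimal sketch of the new pattern;
the offsets and replacement string are placeholders:

	#include <linux/netfilter.h>
	#include <net/netfilter/nf_nat_helper.h>

	static unsigned int example_mangle(struct sk_buff *skb, struct nf_conn *ct,
					   enum ip_conntrack_info ctinfo,
					   unsigned int protoff)
	{
		static const char rep[] = "1024";

		/* Mangling only fails under memory pressure; drop then. */
		if (!nf_nat_mangle_tcp_packet(skb, ct, ctinfo, protoff,
					      0, 4, rep, sizeof(rep) - 1))
			return NF_DROP;

		return NF_ACCEPT;
	}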

View File

@ -24,8 +24,7 @@ struct nf_queue_entry {
struct nf_queue_handler {
int (*outfn)(struct nf_queue_entry *entry,
unsigned int queuenum);
-	void (*nf_hook_drop)(struct net *net,
-			     const struct nf_hook_entry *hooks);
+	unsigned int (*nf_hook_drop)(struct net *net);
};
void nf_register_queue_handler(struct net *net, const struct nf_queue_handler *qh);

View File

@ -911,6 +911,11 @@ static inline struct nft_base_chain *nft_base_chain(const struct nft_chain *chai
return container_of(chain, struct nft_base_chain, chain);
}
static inline bool nft_is_base_chain(const struct nft_chain *chain)
{
return chain->flags & NFT_BASE_CHAIN;
}
int __nft_release_basechain(struct nft_ctx *ctx);
unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv);
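
The helper replaces open-coded "chain->flags & NFT_BASE_CHAIN" tests
scattered across nf_tables. A sketch of typical use, guarding access to
base-chain-only state:

	#include <net/netfilter/nf_tables.h>

	/* Sketch: only base chains carry hook registration and policy
	 * state, so downcasting via nft_base_chain() must be guarded. */
	static bool example_chain_policy(const struct nft_chain *chain, u8 *policy)
	{
		if (!nft_is_base_chain(chain))
			return false;

		*policy = nft_base_chain(chain)->policy;
		return true;
	}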

View File

@ -28,12 +28,14 @@ enum ip_conntrack_info {
 	/* only for userspace compatibility */
 #ifndef __KERNEL__
 	IP_CT_NEW_REPLY = IP_CT_NUMBER,
+#else
+	IP_CT_UNTRACKED = 7,
 #endif
 };

 #define NF_CT_STATE_INVALID_BIT		(1 << 0)
 #define NF_CT_STATE_BIT(ctinfo)		(1 << ((ctinfo) % IP_CT_IS_REPLY + 1))
-#define NF_CT_STATE_UNTRACKED_BIT	(1 << (IP_CT_NUMBER + 1))
+#define NF_CT_STATE_UNTRACKED_BIT	(1 << (IP_CT_UNTRACKED + 1))
/* Bitset representing status of connection. */
enum ip_conntrack_status {
@ -94,7 +96,7 @@ enum ip_conntrack_status {
IPS_TEMPLATE_BIT = 11,
IPS_TEMPLATE = (1 << IPS_TEMPLATE_BIT),
/* Conntrack is a fake untracked entry */
/* Conntrack is a fake untracked entry. Obsolete and not used anymore */
IPS_UNTRACKED_BIT = 12,
IPS_UNTRACKED = (1 << IPS_UNTRACKED_BIT),
@ -117,6 +119,9 @@ enum ip_conntrack_events {
IPCT_NATSEQADJ = IPCT_SEQADJ,
IPCT_SECMARK, /* new security mark has been set */
IPCT_LABEL, /* new connlabel has been set */
#ifdef __KERNEL__
__IPCT_MAX
#endif
};
enum ip_conntrack_expect_events {
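
The fake per-cpu untracked conntrack is replaced by a ctinfo
annotation: a packet is untracked when its skb carries no nf_conn and
ctinfo is IP_CT_UNTRACKED. A sketch of the new marking pattern used by
the dup/notrack paths in this series:

	#include <net/netfilter/nf_conntrack.h>

	/* Sketch: no object to refcount anymore; the state lives in the
	 * skb's ctinfo bits. Rulesets match it via
	 * NF_CT_STATE_UNTRACKED_BIT == (1 << (IP_CT_UNTRACKED + 1)). */
	static void example_mark_untracked(struct sk_buff *skb)
	{
		nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
	}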

View File

@ -901,6 +901,7 @@ enum nft_rt_attributes {
* @NFT_CT_BYTES: conntrack bytes
* @NFT_CT_AVGPKT: conntrack average bytes per packet
* @NFT_CT_ZONE: conntrack zone
* @NFT_CT_EVENTMASK: ctnetlink events to be generated for this conntrack
*/
enum nft_ct_keys {
NFT_CT_STATE,
@ -921,6 +922,7 @@ enum nft_ct_keys {
NFT_CT_BYTES,
NFT_CT_AVGPKT,
NFT_CT_ZONE,
NFT_CT_EVENTMASK,
};
/**

View File

@ -65,13 +65,13 @@ static int ebt_broute(struct sk_buff *skb)
static int __net_init broute_net_init(struct net *net)
{
net->xt.broute_table = ebt_register_table(net, &broute_table);
net->xt.broute_table = ebt_register_table(net, &broute_table, NULL);
return PTR_ERR_OR_ZERO(net->xt.broute_table);
}
static void __net_exit broute_net_exit(struct net *net)
{
ebt_unregister_table(net, net->xt.broute_table);
ebt_unregister_table(net, net->xt.broute_table, NULL);
}
static struct pernet_operations broute_net_ops = {

View File

@ -93,13 +93,13 @@ static struct nf_hook_ops ebt_ops_filter[] __read_mostly = {
static int __net_init frame_filter_net_init(struct net *net)
{
net->xt.frame_filter = ebt_register_table(net, &frame_filter);
net->xt.frame_filter = ebt_register_table(net, &frame_filter, ebt_ops_filter);
return PTR_ERR_OR_ZERO(net->xt.frame_filter);
}
static void __net_exit frame_filter_net_exit(struct net *net)
{
ebt_unregister_table(net, net->xt.frame_filter);
ebt_unregister_table(net, net->xt.frame_filter, ebt_ops_filter);
}
static struct pernet_operations frame_filter_net_ops = {
@ -109,20 +109,11 @@ static struct pernet_operations frame_filter_net_ops = {
static int __init ebtable_filter_init(void)
{
int ret;
ret = register_pernet_subsys(&frame_filter_net_ops);
if (ret < 0)
return ret;
ret = nf_register_hooks(ebt_ops_filter, ARRAY_SIZE(ebt_ops_filter));
if (ret < 0)
unregister_pernet_subsys(&frame_filter_net_ops);
return ret;
return register_pernet_subsys(&frame_filter_net_ops);
}
static void __exit ebtable_filter_fini(void)
{
nf_unregister_hooks(ebt_ops_filter, ARRAY_SIZE(ebt_ops_filter));
unregister_pernet_subsys(&frame_filter_net_ops);
}

View File

@ -93,13 +93,13 @@ static struct nf_hook_ops ebt_ops_nat[] __read_mostly = {
static int __net_init frame_nat_net_init(struct net *net)
{
net->xt.frame_nat = ebt_register_table(net, &frame_nat);
net->xt.frame_nat = ebt_register_table(net, &frame_nat, ebt_ops_nat);
return PTR_ERR_OR_ZERO(net->xt.frame_nat);
}
static void __net_exit frame_nat_net_exit(struct net *net)
{
ebt_unregister_table(net, net->xt.frame_nat);
ebt_unregister_table(net, net->xt.frame_nat, ebt_ops_nat);
}
static struct pernet_operations frame_nat_net_ops = {
@ -109,20 +109,11 @@ static struct pernet_operations frame_nat_net_ops = {
static int __init ebtable_nat_init(void)
{
int ret;
ret = register_pernet_subsys(&frame_nat_net_ops);
if (ret < 0)
return ret;
ret = nf_register_hooks(ebt_ops_nat, ARRAY_SIZE(ebt_ops_nat));
if (ret < 0)
unregister_pernet_subsys(&frame_nat_net_ops);
return ret;
return register_pernet_subsys(&frame_nat_net_ops);
}
static void __exit ebtable_nat_fini(void)
{
nf_unregister_hooks(ebt_ops_nat, ARRAY_SIZE(ebt_ops_nat));
unregister_pernet_subsys(&frame_nat_net_ops);
}

View File

@ -1157,8 +1157,30 @@ free_newinfo:
return ret;
}
static void __ebt_unregister_table(struct net *net, struct ebt_table *table)
{
int i;
mutex_lock(&ebt_mutex);
list_del(&table->list);
mutex_unlock(&ebt_mutex);
EBT_ENTRY_ITERATE(table->private->entries, table->private->entries_size,
ebt_cleanup_entry, net, NULL);
if (table->private->nentries)
module_put(table->me);
vfree(table->private->entries);
if (table->private->chainstack) {
for_each_possible_cpu(i)
vfree(table->private->chainstack[i]);
vfree(table->private->chainstack);
}
vfree(table->private);
kfree(table);
}
struct ebt_table *
ebt_register_table(struct net *net, const struct ebt_table *input_table)
ebt_register_table(struct net *net, const struct ebt_table *input_table,
const struct nf_hook_ops *ops)
{
struct ebt_table_info *newinfo;
struct ebt_table *t, *table;
@ -1238,6 +1260,16 @@ ebt_register_table(struct net *net, const struct ebt_table *input_table)
}
list_add(&table->list, &net->xt.tables[NFPROTO_BRIDGE]);
mutex_unlock(&ebt_mutex);
if (!ops)
return table;
ret = nf_register_net_hooks(net, ops, hweight32(table->valid_hooks));
if (ret) {
__ebt_unregister_table(net, table);
return ERR_PTR(ret);
}
return table;
free_unlock:
mutex_unlock(&ebt_mutex);
@ -1256,29 +1288,12 @@ out:
return ERR_PTR(ret);
}
void ebt_unregister_table(struct net *net, struct ebt_table *table)
void ebt_unregister_table(struct net *net, struct ebt_table *table,
const struct nf_hook_ops *ops)
{
int i;
if (!table) {
BUGPRINT("Request to unregister NULL table!!!\n");
return;
}
mutex_lock(&ebt_mutex);
list_del(&table->list);
mutex_unlock(&ebt_mutex);
EBT_ENTRY_ITERATE(table->private->entries, table->private->entries_size,
ebt_cleanup_entry, net, NULL);
if (table->private->nentries)
module_put(table->me);
vfree(table->private->entries);
if (table->private->chainstack) {
for_each_possible_cpu(i)
vfree(table->private->chainstack[i]);
vfree(table->private->chainstack);
}
vfree(table->private);
kfree(table);
if (ops)
nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks));
__ebt_unregister_table(net, table);
}
/* userspace just supplied us with counters */
@ -1713,7 +1728,7 @@ static int compat_copy_entry_to_user(struct ebt_entry *e, void __user **dstptr,
if (*size < sizeof(*ce))
return -EINVAL;
ce = (struct ebt_entry __user *)*dstptr;
ce = *dstptr;
if (copy_to_user(ce, e, sizeof(*ce)))
return -EFAULT;

View File

@ -111,7 +111,7 @@ nft_meta_bridge_select_ops(const struct nft_ctx *ctx,
static struct nft_expr_type nft_meta_bridge_type __read_mostly = {
.family = NFPROTO_BRIDGE,
.name = "meta",
.select_ops = &nft_meta_bridge_select_ops,
.select_ops = nft_meta_bridge_select_ops,
.policy = nft_meta_policy,
.maxattr = NFTA_META_MAX,
.owner = THIS_MODULE,

View File

@ -134,7 +134,7 @@ static int __init dn_rtmsg_init(void)
return -ENOMEM;
}
rv = nf_register_hook(&dnrmg_ops);
rv = nf_register_net_hook(&init_net, &dnrmg_ops);
if (rv) {
netlink_kernel_release(dnrmg);
}
@ -144,7 +144,7 @@ static int __init dn_rtmsg_init(void)
static void __exit dn_rtmsg_fini(void)
{
nf_unregister_hook(&dnrmg_ops);
nf_unregister_net_hook(&init_net, &dnrmg_ops);
netlink_kernel_release(dnrmg);
}

View File

@ -309,8 +309,7 @@ static int mark_source_chains(const struct xt_table_info *newinfo,
*/
for (hook = 0; hook < NF_ARP_NUMHOOKS; hook++) {
unsigned int pos = newinfo->hook_entry[hook];
struct arpt_entry *e
= (struct arpt_entry *)(entry0 + pos);
struct arpt_entry *e = entry0 + pos;
if (!(valid_hooks & (1 << hook)))
continue;
@ -354,14 +353,12 @@ static int mark_source_chains(const struct xt_table_info *newinfo,
if (pos == oldpos)
goto next;
e = (struct arpt_entry *)
(entry0 + pos);
e = entry0 + pos;
} while (oldpos == pos + e->next_offset);
/* Move along one */
size = e->next_offset;
e = (struct arpt_entry *)
(entry0 + pos + size);
e = entry0 + pos + size;
if (pos + size >= newinfo->size)
return 0;
e->counters.pcnt = pos;
@ -376,16 +373,14 @@ static int mark_source_chains(const struct xt_table_info *newinfo,
if (!xt_find_jump_offset(offsets, newpos,
newinfo->number))
return 0;
e = (struct arpt_entry *)
(entry0 + newpos);
e = entry0 + newpos;
} else {
/* ... this is a fallthru */
newpos = pos + e->next_offset;
if (newpos >= newinfo->size)
return 0;
}
e = (struct arpt_entry *)
(entry0 + newpos);
e = entry0 + newpos;
e->counters.pcnt = pos;
pos = newpos;
}
@ -681,7 +676,7 @@ static int copy_entries_to_user(unsigned int total_size,
for (off = 0, num = 0; off < total_size; off += e->next_offset, num++){
const struct xt_entry_target *t;
e = (struct arpt_entry *)(loc_cpu_entry + off);
e = loc_cpu_entry + off;
if (copy_to_user(userptr + off, e, sizeof(*e))) {
ret = -EFAULT;
goto free_counters;
@ -1128,7 +1123,7 @@ compat_copy_entry_from_user(struct compat_arpt_entry *e, void **dstptr,
int h;
origsize = *size;
de = (struct arpt_entry *)*dstptr;
de = *dstptr;
memcpy(de, e, sizeof(struct arpt_entry));
memcpy(&de->counters, &e->counters, sizeof(e->counters));
@ -1322,7 +1317,7 @@ static int compat_copy_entry_to_user(struct arpt_entry *e, void __user **dstptr,
int ret;
origsize = *size;
ce = (struct compat_arpt_entry __user *)*dstptr;
ce = *dstptr;
if (copy_to_user(ce, e, sizeof(struct arpt_entry)) != 0 ||
copy_to_user(&ce->counters, &counters[i],
sizeof(counters[i])) != 0)

View File

@ -382,7 +382,7 @@ mark_source_chains(const struct xt_table_info *newinfo,
to 0 as we leave), and comefrom to save source hook bitmask */
for (hook = 0; hook < NF_INET_NUMHOOKS; hook++) {
unsigned int pos = newinfo->hook_entry[hook];
struct ipt_entry *e = (struct ipt_entry *)(entry0 + pos);
struct ipt_entry *e = entry0 + pos;
if (!(valid_hooks & (1 << hook)))
continue;
@ -424,14 +424,12 @@ mark_source_chains(const struct xt_table_info *newinfo,
if (pos == oldpos)
goto next;
e = (struct ipt_entry *)
(entry0 + pos);
e = entry0 + pos;
} while (oldpos == pos + e->next_offset);
/* Move along one */
size = e->next_offset;
e = (struct ipt_entry *)
(entry0 + pos + size);
e = entry0 + pos + size;
if (pos + size >= newinfo->size)
return 0;
e->counters.pcnt = pos;
@ -446,16 +444,14 @@ mark_source_chains(const struct xt_table_info *newinfo,
if (!xt_find_jump_offset(offsets, newpos,
newinfo->number))
return 0;
e = (struct ipt_entry *)
(entry0 + newpos);
e = entry0 + newpos;
} else {
/* ... this is a fallthru */
newpos = pos + e->next_offset;
if (newpos >= newinfo->size)
return 0;
}
e = (struct ipt_entry *)
(entry0 + newpos);
e = entry0 + newpos;
e->counters.pcnt = pos;
pos = newpos;
}
@ -834,7 +830,7 @@ copy_entries_to_user(unsigned int total_size,
const struct xt_entry_match *m;
const struct xt_entry_target *t;
e = (struct ipt_entry *)(loc_cpu_entry + off);
e = loc_cpu_entry + off;
if (copy_to_user(userptr + off, e, sizeof(*e))) {
ret = -EFAULT;
goto free_counters;
@ -1229,7 +1225,7 @@ compat_copy_entry_to_user(struct ipt_entry *e, void __user **dstptr,
int ret = 0;
origsize = *size;
ce = (struct compat_ipt_entry __user *)*dstptr;
ce = *dstptr;
if (copy_to_user(ce, e, sizeof(struct ipt_entry)) != 0 ||
copy_to_user(&ce->counters, &counters[i],
sizeof(counters[i])) != 0)
@ -1366,7 +1362,7 @@ compat_copy_entry_from_user(struct compat_ipt_entry *e, void **dstptr,
struct xt_entry_match *ematch;
origsize = *size;
de = (struct ipt_entry *)*dstptr;
de = *dstptr;
memcpy(de, e, sizeof(struct ipt_entry));
memcpy(&de->counters, &e->counters, sizeof(e->counters));

View File

@ -293,12 +293,16 @@ synproxy_tg4(struct sk_buff *skb, const struct xt_action_param *par)
XT_SYNPROXY_OPT_ECN);
synproxy_send_client_synack(net, skb, th, &opts);
return NF_DROP;
consume_skb(skb);
return NF_STOLEN;
} else if (th->ack && !(th->fin || th->rst || th->syn)) {
/* ACK from client */
synproxy_recv_client_ack(net, skb, th, &opts, ntohl(th->seq));
return NF_DROP;
if (synproxy_recv_client_ack(net, skb, th, &opts, ntohl(th->seq))) {
consume_skb(skb);
return NF_STOLEN;
} else {
return NF_DROP;
}
}
return XT_CONTINUE;
@ -367,10 +371,13 @@ static unsigned int ipv4_synproxy_hook(void *priv,
* number match the one of first SYN.
*/
if (synproxy_recv_client_ack(net, skb, th, &opts,
ntohl(th->seq) + 1))
ntohl(th->seq) + 1)) {
this_cpu_inc(snet->stats->cookie_retrans);
return NF_DROP;
consume_skb(skb);
return NF_STOLEN;
} else {
return NF_DROP;
}
}
synproxy->isn = ntohl(th->ack_seq);
@ -409,33 +416,6 @@ static unsigned int ipv4_synproxy_hook(void *priv,
return NF_ACCEPT;
}
static int synproxy_tg4_check(const struct xt_tgchk_param *par)
{
const struct ipt_entry *e = par->entryinfo;
if (e->ip.proto != IPPROTO_TCP ||
e->ip.invflags & XT_INV_PROTO)
return -EINVAL;
return nf_ct_netns_get(par->net, par->family);
}
static void synproxy_tg4_destroy(const struct xt_tgdtor_param *par)
{
nf_ct_netns_put(par->net, par->family);
}
static struct xt_target synproxy_tg4_reg __read_mostly = {
.name = "SYNPROXY",
.family = NFPROTO_IPV4,
.hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD),
.target = synproxy_tg4,
.targetsize = sizeof(struct xt_synproxy_info),
.checkentry = synproxy_tg4_check,
.destroy = synproxy_tg4_destroy,
.me = THIS_MODULE,
};
static struct nf_hook_ops ipv4_synproxy_ops[] __read_mostly = {
{
.hook = ipv4_synproxy_hook,
@ -451,31 +431,63 @@ static struct nf_hook_ops ipv4_synproxy_ops[] __read_mostly = {
},
};
static int __init synproxy_tg4_init(void)
static int synproxy_tg4_check(const struct xt_tgchk_param *par)
{
struct synproxy_net *snet = synproxy_pernet(par->net);
const struct ipt_entry *e = par->entryinfo;
int err;
err = nf_register_hooks(ipv4_synproxy_ops,
ARRAY_SIZE(ipv4_synproxy_ops));
if (err < 0)
goto err1;
if (e->ip.proto != IPPROTO_TCP ||
e->ip.invflags & XT_INV_PROTO)
return -EINVAL;
err = xt_register_target(&synproxy_tg4_reg);
if (err < 0)
goto err2;
err = nf_ct_netns_get(par->net, par->family);
if (err)
return err;
return 0;
if (snet->hook_ref4 == 0) {
err = nf_register_net_hooks(par->net, ipv4_synproxy_ops,
ARRAY_SIZE(ipv4_synproxy_ops));
if (err) {
nf_ct_netns_put(par->net, par->family);
return err;
}
}
err2:
nf_unregister_hooks(ipv4_synproxy_ops, ARRAY_SIZE(ipv4_synproxy_ops));
err1:
snet->hook_ref4++;
return err;
}
static void synproxy_tg4_destroy(const struct xt_tgdtor_param *par)
{
struct synproxy_net *snet = synproxy_pernet(par->net);
snet->hook_ref4--;
if (snet->hook_ref4 == 0)
nf_unregister_net_hooks(par->net, ipv4_synproxy_ops,
ARRAY_SIZE(ipv4_synproxy_ops));
nf_ct_netns_put(par->net, par->family);
}
static struct xt_target synproxy_tg4_reg __read_mostly = {
.name = "SYNPROXY",
.family = NFPROTO_IPV4,
.hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD),
.target = synproxy_tg4,
.targetsize = sizeof(struct xt_synproxy_info),
.checkentry = synproxy_tg4_check,
.destroy = synproxy_tg4_destroy,
.me = THIS_MODULE,
};
static int __init synproxy_tg4_init(void)
{
return xt_register_target(&synproxy_tg4_reg);
}
static void __exit synproxy_tg4_exit(void)
{
xt_unregister_target(&synproxy_tg4_reg);
nf_unregister_hooks(ipv4_synproxy_ops, ARRAY_SIZE(ipv4_synproxy_ops));
}
module_init(synproxy_tg4_init);

View File

@ -69,8 +69,7 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
#if IS_ENABLED(CONFIG_NF_CONNTRACK)
/* Avoid counting cloned packets towards the original connection. */
nf_reset(skb);
nf_ct_set(skb, nf_ct_untracked_get(), IP_CT_NEW);
nf_conntrack_get(skb_nfct(skb));
nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
#endif
/*
* If we are in PREROUTING/INPUT, decrease the TTL to mitigate potential

View File

@ -264,13 +264,7 @@ nf_nat_ipv4_fn(void *priv, struct sk_buff *skb,
if (!ct)
return NF_ACCEPT;
-	/* Don't try to NAT if this packet is not conntracked */
-	if (nf_ct_is_untracked(ct))
-		return NF_ACCEPT;
-
-	nat = nf_ct_nat_ext_add(ct);
-	if (nat == NULL)
-		return NF_ACCEPT;
+	nat = nfct_nat(ct);
switch (ctinfo) {
case IP_CT_RELATED:

View File

@ -37,7 +37,6 @@ nf_nat_masquerade_ipv4(struct sk_buff *skb, unsigned int hooknum,
NF_CT_ASSERT(hooknum == NF_INET_POST_ROUTING);
ct = nf_ct_get(skb, &ctinfo);
-	nat = nfct_nat(ct);
NF_CT_ASSERT(ct && (ctinfo == IP_CT_NEW || ctinfo == IP_CT_RELATED ||
ctinfo == IP_CT_RELATED_REPLY));
@ -56,7 +55,9 @@ nf_nat_masquerade_ipv4(struct sk_buff *skb, unsigned int hooknum,
return NF_DROP;
}
-	nat->masq_index = out->ifindex;
+	nat = nf_ct_nat_ext_add(ct);
+	if (nat)
+		nat->masq_index = out->ifindex;
/* Transfer from original range. */
memset(&newrange.min_addr, 0, sizeof(newrange.min_addr));

View File

@ -49,9 +49,14 @@ static void pptp_nat_expected(struct nf_conn *ct,
const struct nf_ct_pptp_master *ct_pptp_info;
const struct nf_nat_pptp *nat_pptp_info;
struct nf_nat_range range;
+	struct nf_conn_nat *nat;
+
+	nat = nf_ct_nat_ext_add(ct);
+	if (WARN_ON_ONCE(!nat))
+		return;
+
+	nat_pptp_info = &nat->help.nat_pptp_info;
 	ct_pptp_info = nfct_help_data(master);
-	nat_pptp_info = &nfct_nat(master)->help.nat_pptp_info;
/* And here goes the grand finale of corrosion... */
if (exp->dir == IP_CT_DIR_ORIGINAL) {
@ -120,13 +125,17 @@ pptp_outbound_pkt(struct sk_buff *skb,
{
struct nf_ct_pptp_master *ct_pptp_info;
struct nf_conn_nat *nat = nfct_nat(ct);
struct nf_nat_pptp *nat_pptp_info;
u_int16_t msg;
__be16 new_callid;
unsigned int cid_off;
if (WARN_ON_ONCE(!nat))
return NF_DROP;
nat_pptp_info = &nat->help.nat_pptp_info;
ct_pptp_info = nfct_help_data(ct);
nat_pptp_info = &nfct_nat(ct)->help.nat_pptp_info;
new_callid = ct_pptp_info->pns_call_id;
@ -177,11 +186,11 @@ pptp_outbound_pkt(struct sk_buff *skb,
ntohs(REQ_CID(pptpReq, cid_off)), ntohs(new_callid));
/* mangle packet */
if (nf_nat_mangle_tcp_packet(skb, ct, ctinfo, protoff,
cid_off + sizeof(struct pptp_pkt_hdr) +
sizeof(struct PptpControlHeader),
sizeof(new_callid), (char *)&new_callid,
sizeof(new_callid)) == 0)
if (!nf_nat_mangle_tcp_packet(skb, ct, ctinfo, protoff,
cid_off + sizeof(struct pptp_pkt_hdr) +
sizeof(struct PptpControlHeader),
sizeof(new_callid), (char *)&new_callid,
sizeof(new_callid)))
return NF_DROP;
return NF_ACCEPT;
}
@ -191,11 +200,15 @@ pptp_exp_gre(struct nf_conntrack_expect *expect_orig,
struct nf_conntrack_expect *expect_reply)
{
const struct nf_conn *ct = expect_orig->master;
struct nf_conn_nat *nat = nfct_nat(ct);
struct nf_ct_pptp_master *ct_pptp_info;
struct nf_nat_pptp *nat_pptp_info;
if (WARN_ON_ONCE(!nat))
return;
nat_pptp_info = &nat->help.nat_pptp_info;
ct_pptp_info = nfct_help_data(ct);
nat_pptp_info = &nfct_nat(ct)->help.nat_pptp_info;
/* save original PAC call ID in nat_info */
nat_pptp_info->pac_call_id = ct_pptp_info->pac_call_id;
@ -223,11 +236,15 @@ pptp_inbound_pkt(struct sk_buff *skb,
union pptp_ctrl_union *pptpReq)
{
const struct nf_nat_pptp *nat_pptp_info;
struct nf_conn_nat *nat = nfct_nat(ct);
u_int16_t msg;
__be16 new_pcid;
unsigned int pcid_off;
nat_pptp_info = &nfct_nat(ct)->help.nat_pptp_info;
if (WARN_ON_ONCE(!nat))
return NF_DROP;
nat_pptp_info = &nat->help.nat_pptp_info;
new_pcid = nat_pptp_info->pns_call_id;
switch (msg = ntohs(ctlh->messageType)) {
@ -271,11 +288,11 @@ pptp_inbound_pkt(struct sk_buff *skb,
pr_debug("altering peer call id from 0x%04x to 0x%04x\n",
ntohs(REQ_CID(pptpReq, pcid_off)), ntohs(new_pcid));
if (nf_nat_mangle_tcp_packet(skb, ct, ctinfo, protoff,
pcid_off + sizeof(struct pptp_pkt_hdr) +
sizeof(struct PptpControlHeader),
sizeof(new_pcid), (char *)&new_pcid,
sizeof(new_pcid)) == 0)
if (!nf_nat_mangle_tcp_packet(skb, ct, ctinfo, protoff,
pcid_off + sizeof(struct pptp_pkt_hdr) +
sizeof(struct PptpControlHeader),
sizeof(new_pcid), (char *)&new_pcid,
sizeof(new_pcid)))
return NF_DROP;
return NF_ACCEPT;
}

View File

@ -827,8 +827,8 @@ static unsigned char snmp_object_decode(struct asn1_ctx *ctx,
return 1;
}
static unsigned char snmp_request_decode(struct asn1_ctx *ctx,
struct snmp_request *request)
static unsigned char noinline_for_stack
snmp_request_decode(struct asn1_ctx *ctx, struct snmp_request *request)
{
unsigned int cls, con, tag;
unsigned char *end;
@ -920,10 +920,10 @@ static inline void mangle_address(unsigned char *begin,
}
}
static unsigned char snmp_trap_decode(struct asn1_ctx *ctx,
struct snmp_v1_trap *trap,
const struct oct1_map *map,
__sum16 *check)
static unsigned char noinline_for_stack
snmp_trap_decode(struct asn1_ctx *ctx, struct snmp_v1_trap *trap,
const struct oct1_map *map,
__sum16 *check)
{
unsigned int cls, con, tag, len;
unsigned char *end;

View File

@ -139,7 +139,7 @@ struct sock *nf_sk_lookup_slow_v4(struct net *net, const struct sk_buff *skb,
* SNAT-ted connection.
*/
ct = nf_ct_get(skb, &ctinfo);
if (ct && !nf_ct_is_untracked(ct) &&
if (ct &&
((iph->protocol != IPPROTO_ICMP &&
ctinfo == IP_CT_ESTABLISHED_REPLY) ||
(iph->protocol == IPPROTO_ICMP &&

View File

@ -212,7 +212,7 @@ nft_fib4_select_ops(const struct nft_ctx *ctx,
static struct nft_expr_type nft_fib4_type __read_mostly = {
.name = "fib",
.select_ops = &nft_fib4_select_ops,
.select_ops = nft_fib4_select_ops,
.policy = nft_fib_policy,
.maxattr = NFTA_FIB_MAX,
.family = NFPROTO_IPV4,

View File

@ -51,15 +51,6 @@ void *ip6t_alloc_initial_table(const struct xt_table *info)
}
EXPORT_SYMBOL_GPL(ip6t_alloc_initial_table);
/*
We keep a set of rules for each CPU, so we can avoid write-locking
them in the softirq when updating the counters and therefore
only need to read-lock in the softirq; doing a write_lock_bh() in user
context stops packets coming through and allows user context to read
the counters or update the rules.
Hence the start of any table is given by get_table() below. */
/* Returns whether matches rule or not. */
/* Performance critical - called for every packet */
static inline bool
@ -411,7 +402,7 @@ mark_source_chains(const struct xt_table_info *newinfo,
to 0 as we leave), and comefrom to save source hook bitmask */
for (hook = 0; hook < NF_INET_NUMHOOKS; hook++) {
unsigned int pos = newinfo->hook_entry[hook];
struct ip6t_entry *e = (struct ip6t_entry *)(entry0 + pos);
struct ip6t_entry *e = entry0 + pos;
if (!(valid_hooks & (1 << hook)))
continue;
@ -453,14 +444,12 @@ mark_source_chains(const struct xt_table_info *newinfo,
if (pos == oldpos)
goto next;
e = (struct ip6t_entry *)
(entry0 + pos);
e = entry0 + pos;
} while (oldpos == pos + e->next_offset);
/* Move along one */
size = e->next_offset;
e = (struct ip6t_entry *)
(entry0 + pos + size);
e = entry0 + pos + size;
if (pos + size >= newinfo->size)
return 0;
e->counters.pcnt = pos;
@ -475,16 +464,14 @@ mark_source_chains(const struct xt_table_info *newinfo,
if (!xt_find_jump_offset(offsets, newpos,
newinfo->number))
return 0;
e = (struct ip6t_entry *)
(entry0 + newpos);
e = entry0 + newpos;
} else {
/* ... this is a fallthru */
newpos = pos + e->next_offset;
if (newpos >= newinfo->size)
return 0;
}
e = (struct ip6t_entry *)
(entry0 + newpos);
e = entry0 + newpos;
e->counters.pcnt = pos;
pos = newpos;
}
@ -863,7 +850,7 @@ copy_entries_to_user(unsigned int total_size,
const struct xt_entry_match *m;
const struct xt_entry_target *t;
e = (struct ip6t_entry *)(loc_cpu_entry + off);
e = loc_cpu_entry + off;
if (copy_to_user(userptr + off, e, sizeof(*e))) {
ret = -EFAULT;
goto free_counters;
@ -1258,7 +1245,7 @@ compat_copy_entry_to_user(struct ip6t_entry *e, void __user **dstptr,
int ret = 0;
origsize = *size;
ce = (struct compat_ip6t_entry __user *)*dstptr;
ce = *dstptr;
if (copy_to_user(ce, e, sizeof(struct ip6t_entry)) != 0 ||
copy_to_user(&ce->counters, &counters[i],
sizeof(counters[i])) != 0)
@ -1394,7 +1381,7 @@ compat_copy_entry_from_user(struct compat_ip6t_entry *e, void **dstptr,
struct xt_entry_match *ematch;
origsize = *size;
de = (struct ip6t_entry *)*dstptr;
de = *dstptr;
memcpy(de, e, sizeof(struct ip6t_entry));
memcpy(&de->counters, &e->counters, sizeof(e->counters));

View File

@ -307,12 +307,17 @@ synproxy_tg6(struct sk_buff *skb, const struct xt_action_param *par)
XT_SYNPROXY_OPT_ECN);
synproxy_send_client_synack(net, skb, th, &opts);
return NF_DROP;
consume_skb(skb);
return NF_STOLEN;
} else if (th->ack && !(th->fin || th->rst || th->syn)) {
/* ACK from client */
synproxy_recv_client_ack(net, skb, th, &opts, ntohl(th->seq));
return NF_DROP;
if (synproxy_recv_client_ack(net, skb, th, &opts, ntohl(th->seq))) {
consume_skb(skb);
return NF_STOLEN;
} else {
return NF_DROP;
}
}
return XT_CONTINUE;
@ -388,10 +393,13 @@ static unsigned int ipv6_synproxy_hook(void *priv,
* number match the one of first SYN.
*/
if (synproxy_recv_client_ack(net, skb, th, &opts,
ntohl(th->seq) + 1))
ntohl(th->seq) + 1)) {
this_cpu_inc(snet->stats->cookie_retrans);
return NF_DROP;
consume_skb(skb);
return NF_STOLEN;
} else {
return NF_DROP;
}
}
synproxy->isn = ntohl(th->ack_seq);
@ -430,34 +438,6 @@ static unsigned int ipv6_synproxy_hook(void *priv,
return NF_ACCEPT;
}
static int synproxy_tg6_check(const struct xt_tgchk_param *par)
{
const struct ip6t_entry *e = par->entryinfo;
if (!(e->ipv6.flags & IP6T_F_PROTO) ||
e->ipv6.proto != IPPROTO_TCP ||
e->ipv6.invflags & XT_INV_PROTO)
return -EINVAL;
return nf_ct_netns_get(par->net, par->family);
}
static void synproxy_tg6_destroy(const struct xt_tgdtor_param *par)
{
nf_ct_netns_put(par->net, par->family);
}
static struct xt_target synproxy_tg6_reg __read_mostly = {
.name = "SYNPROXY",
.family = NFPROTO_IPV6,
.hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD),
.target = synproxy_tg6,
.targetsize = sizeof(struct xt_synproxy_info),
.checkentry = synproxy_tg6_check,
.destroy = synproxy_tg6_destroy,
.me = THIS_MODULE,
};
static struct nf_hook_ops ipv6_synproxy_ops[] __read_mostly = {
{
.hook = ipv6_synproxy_hook,
@ -473,31 +453,64 @@ static struct nf_hook_ops ipv6_synproxy_ops[] __read_mostly = {
},
};
static int __init synproxy_tg6_init(void)
static int synproxy_tg6_check(const struct xt_tgchk_param *par)
{
struct synproxy_net *snet = synproxy_pernet(par->net);
const struct ip6t_entry *e = par->entryinfo;
int err;
err = nf_register_hooks(ipv6_synproxy_ops,
ARRAY_SIZE(ipv6_synproxy_ops));
if (err < 0)
goto err1;
if (!(e->ipv6.flags & IP6T_F_PROTO) ||
e->ipv6.proto != IPPROTO_TCP ||
e->ipv6.invflags & XT_INV_PROTO)
return -EINVAL;
err = xt_register_target(&synproxy_tg6_reg);
if (err < 0)
goto err2;
err = nf_ct_netns_get(par->net, par->family);
if (err)
return err;
return 0;
if (snet->hook_ref6 == 0) {
err = nf_register_net_hooks(par->net, ipv6_synproxy_ops,
ARRAY_SIZE(ipv6_synproxy_ops));
if (err) {
nf_ct_netns_put(par->net, par->family);
return err;
}
}
err2:
nf_unregister_hooks(ipv6_synproxy_ops, ARRAY_SIZE(ipv6_synproxy_ops));
err1:
snet->hook_ref6++;
return err;
}
static void synproxy_tg6_destroy(const struct xt_tgdtor_param *par)
{
struct synproxy_net *snet = synproxy_pernet(par->net);
snet->hook_ref6--;
if (snet->hook_ref6 == 0)
nf_unregister_net_hooks(par->net, ipv6_synproxy_ops,
ARRAY_SIZE(ipv6_synproxy_ops));
nf_ct_netns_put(par->net, par->family);
}
static struct xt_target synproxy_tg6_reg __read_mostly = {
.name = "SYNPROXY",
.family = NFPROTO_IPV6,
.hooks = (1 << NF_INET_LOCAL_IN) | (1 << NF_INET_FORWARD),
.target = synproxy_tg6,
.targetsize = sizeof(struct xt_synproxy_info),
.checkentry = synproxy_tg6_check,
.destroy = synproxy_tg6_destroy,
.me = THIS_MODULE,
};
static int __init synproxy_tg6_init(void)
{
return xt_register_target(&synproxy_tg6_reg);
}
static void __exit synproxy_tg6_exit(void)
{
xt_unregister_target(&synproxy_tg6_reg);
nf_unregister_hooks(ipv6_synproxy_ops, ARRAY_SIZE(ipv6_synproxy_ops));
}
module_init(synproxy_tg6_init);

View File

@ -221,8 +221,7 @@ icmpv6_error(struct net *net, struct nf_conn *tmpl,
type = icmp6h->icmp6_type - 130;
if (type >= 0 && type < sizeof(noct_valid_new) &&
noct_valid_new[type]) {
nf_ct_set(skb, nf_ct_untracked_get(), IP_CT_NEW);
nf_conntrack_get(skb_nfct(skb));
nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
return NF_ACCEPT;
}

View File

@ -58,8 +58,7 @@ void nf_dup_ipv6(struct net *net, struct sk_buff *skb, unsigned int hooknum,
#if IS_ENABLED(CONFIG_NF_CONNTRACK)
nf_reset(skb);
nf_ct_set(skb, nf_ct_untracked_get(), IP_CT_NEW);
nf_conntrack_get(skb_nfct(skb));
nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
#endif
if (hooknum == NF_INET_PRE_ROUTING ||
hooknum == NF_INET_LOCAL_IN) {

View File

@ -273,13 +273,7 @@ nf_nat_ipv6_fn(void *priv, struct sk_buff *skb,
if (!ct)
return NF_ACCEPT;
-	/* Don't try to NAT if this packet is not conntracked */
-	if (nf_ct_is_untracked(ct))
-		return NF_ACCEPT;
-
-	nat = nf_ct_nat_ext_add(ct);
-	if (nat == NULL)
-		return NF_ACCEPT;
+	nat = nfct_nat(ct);
switch (ctinfo) {
case IP_CT_RELATED:

View File

@ -30,6 +30,7 @@ nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range *range,
const struct net_device *out)
{
enum ip_conntrack_info ctinfo;
struct nf_conn_nat *nat;
struct in6_addr src;
struct nf_conn *ct;
struct nf_nat_range newrange;
@ -42,7 +43,9 @@ nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range *range,
&ipv6_hdr(skb)->daddr, 0, &src) < 0)
return NF_DROP;
-	nfct_nat(ct)->masq_index = out->ifindex;
+	nat = nf_ct_nat_ext_add(ct);
+	if (nat)
+		nat->masq_index = out->ifindex;
newrange.flags = range->flags | NF_NAT_RANGE_MAP_IPS;
newrange.min_addr.in6 = src;

View File

@ -246,7 +246,7 @@ nft_fib6_select_ops(const struct nft_ctx *ctx,
static struct nft_expr_type nft_fib6_type __read_mostly = {
.name = "fib",
.select_ops = &nft_fib6_select_ops,
.select_ops = nft_fib6_select_ops,
.policy = nft_fib_policy,
.maxattr = NFTA_FIB_MAX,
.family = NFPROTO_IPV6,

View File

@ -126,14 +126,15 @@ int nf_register_net_hook(struct net *net, const struct nf_hook_ops *reg)
}
EXPORT_SYMBOL(nf_register_net_hook);
void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
static struct nf_hook_entry *
__nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
{
struct nf_hook_entry __rcu **pp;
struct nf_hook_entry *p;
pp = nf_hook_entry_head(net, reg);
if (WARN_ON_ONCE(!pp))
return;
return NULL;
mutex_lock(&nf_hook_mutex);
for (; (p = nf_entry_dereference(*pp)) != NULL; pp = &p->next) {
@ -145,7 +146,7 @@ void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
mutex_unlock(&nf_hook_mutex);
if (!p) {
WARN(1, "nf_unregister_net_hook: hook not found!\n");
return;
return NULL;
}
#ifdef CONFIG_NETFILTER_INGRESS
if (reg->pf == NFPROTO_NETDEV && reg->hooknum == NF_NETDEV_INGRESS)
@ -154,10 +155,24 @@ void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
#ifdef HAVE_JUMP_LABEL
static_key_slow_dec(&nf_hooks_needed[reg->pf][reg->hooknum]);
#endif
return p;
}
void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
{
struct nf_hook_entry *p = __nf_unregister_net_hook(net, reg);
unsigned int nfq;
if (!p)
return;
synchronize_net();
nf_queue_nf_hook_drop(net, p);
/* other cpu might still process nfqueue verdict that used reg */
synchronize_net();
nfq = nf_queue_nf_hook_drop(net);
if (nfq)
synchronize_net();
kfree(p);
}
EXPORT_SYMBOL(nf_unregister_net_hook);
@ -183,10 +198,32 @@ err:
EXPORT_SYMBOL(nf_register_net_hooks);
void nf_unregister_net_hooks(struct net *net, const struct nf_hook_ops *reg,
unsigned int n)
unsigned int hookcount)
{
while (n-- > 0)
nf_unregister_net_hook(net, &reg[n]);
struct nf_hook_entry *to_free[16];
unsigned int i, n, nfq;
do {
n = min_t(unsigned int, hookcount, ARRAY_SIZE(to_free));
for (i = 0; i < n; i++)
to_free[i] = __nf_unregister_net_hook(net, &reg[i]);
synchronize_net();
/* need 2nd synchronize_net() if nfqueue is used, skb
* can get reinjected right before nf_queue_hook_drop()
*/
nfq = nf_queue_nf_hook_drop(net);
if (nfq)
synchronize_net();
for (i = 0; i < n; i++)
kfree(to_free[i]);
reg += n;
hookcount -= n;
} while (hookcount > 0);
}
EXPORT_SYMBOL(nf_unregister_net_hooks);

View File

@ -232,7 +232,7 @@ mtype_list(const struct ip_set *set,
if (!test_bit(id, map->members) ||
(SET_WITH_TIMEOUT(set) &&
#ifdef IP_SET_BITMAP_STORED_TIMEOUT
mtype_is_filled((const struct mtype_elem *)x) &&
mtype_is_filled(x) &&
#endif
ip_set_timeout_expired(ext_timeout(x, set))))
continue;
@ -248,8 +248,7 @@ mtype_list(const struct ip_set *set,
}
if (mtype_do_list(skb, map, id, set->dsize))
goto nla_put_failure;
if (ip_set_put_extensions(skb, set, x,
mtype_is_filled((const struct mtype_elem *)x)))
if (ip_set_put_extensions(skb, set, x, mtype_is_filled(x)))
goto nla_put_failure;
ipset_nest_end(skb, nested);
}

View File

@ -502,14 +502,6 @@ __ip_set_put(struct ip_set *set)
/* set->ref can be swapped out by ip_set_swap, netlink events (like dump) need
* a separate reference counter
*/
static inline void
__ip_set_get_netlink(struct ip_set *set)
{
write_lock_bh(&ip_set_ref_lock);
set->ref_netlink++;
write_unlock_bh(&ip_set_ref_lock);
}
static inline void
__ip_set_put_netlink(struct ip_set *set)
{
@ -771,7 +763,7 @@ start_msg(struct sk_buff *skb, u32 portid, u32 seq, unsigned int flags,
struct nlmsghdr *nlh;
struct nfgenmsg *nfmsg;
nlh = nlmsg_put(skb, portid, seq, cmd | (NFNL_SUBSYS_IPSET << 8),
nlh = nlmsg_put(skb, portid, seq, nfnl_msg_type(NFNL_SUBSYS_IPSET, cmd),
sizeof(*nfmsg), flags);
if (!nlh)
return NULL;
@ -1916,7 +1908,7 @@ ip_set_sockfn_get(struct sock *sk, int optval, void __user *user, int *len)
ret = -EFAULT;
goto done;
}
op = (unsigned int *)data;
op = data;
if (*op < IP_SET_OP_VERSION) {
/* Check the version at the beginning of operations */
@ -2014,7 +2006,7 @@ static struct nf_sockopt_ops so_set __read_mostly = {
.pf = PF_INET,
.get_optmin = SO_IP_SET,
.get_optmax = SO_IP_SET + 1,
.get = &ip_set_sockfn_get,
.get = ip_set_sockfn_get,
.owner = THIS_MODULE,
};

View File

@ -2200,6 +2200,7 @@ static struct nf_hook_ops ip_vs_ops[] __read_mostly = {
static int __net_init __ip_vs_init(struct net *net)
{
struct netns_ipvs *ipvs;
int ret;
ipvs = net_generic(net, ip_vs_net_id);
if (ipvs == NULL)
@ -2231,11 +2232,17 @@ static int __net_init __ip_vs_init(struct net *net)
if (ip_vs_sync_net_init(ipvs) < 0)
goto sync_fail;
ret = nf_register_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
if (ret < 0)
goto hook_fail;
return 0;
/*
* Error handling
*/
hook_fail:
ip_vs_sync_net_cleanup(ipvs);
sync_fail:
ip_vs_conn_net_cleanup(ipvs);
conn_fail:
@ -2255,6 +2262,7 @@ static void __net_exit __ip_vs_cleanup(struct net *net)
{
struct netns_ipvs *ipvs = net_ipvs(net);
nf_unregister_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
ip_vs_service_net_cleanup(ipvs); /* ip_vs_flush() with locks */
ip_vs_conn_net_cleanup(ipvs);
ip_vs_app_net_cleanup(ipvs);
@ -2315,24 +2323,16 @@ static int __init ip_vs_init(void)
if (ret < 0)
goto cleanup_sub;
ret = nf_register_hooks(ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
if (ret < 0) {
pr_err("can't register hooks.\n");
goto cleanup_dev;
}
ret = ip_vs_register_nl_ioctl();
if (ret < 0) {
pr_err("can't register netlink/ioctl.\n");
goto cleanup_hooks;
goto cleanup_dev;
}
pr_info("ipvs loaded.\n");
return ret;
cleanup_hooks:
nf_unregister_hooks(ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
cleanup_dev:
unregister_pernet_device(&ipvs_core_dev_ops);
cleanup_sub:
@ -2349,7 +2349,6 @@ exit:
static void __exit ip_vs_cleanup(void)
{
ip_vs_unregister_nl_ioctl();
nf_unregister_hooks(ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
unregister_pernet_device(&ipvs_core_dev_ops);
unregister_pernet_subsys(&ipvs_core_ops); /* free ip_vs struct */
ip_vs_conn_cleanup();

View File

@ -1774,13 +1774,13 @@ static struct ctl_table vs_vars[] = {
.procname = "sync_version",
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = &proc_do_sync_mode,
.proc_handler = proc_do_sync_mode,
},
{
.procname = "sync_ports",
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = &proc_do_sync_ports,
.proc_handler = proc_do_sync_ports,
},
{
.procname = "sync_persist_mode",
@ -2130,8 +2130,8 @@ static int ip_vs_stats_show(struct seq_file *seq, void *v)
/* 01234567 01234567 01234567 0123456701234567 0123456701234567 */
seq_puts(seq,
" Total Incoming Outgoing Incoming Outgoing\n");
seq_printf(seq,
" Conns Packets Packets Bytes Bytes\n");
seq_puts(seq,
" Conns Packets Packets Bytes Bytes\n");
ip_vs_copy_stats(&show, &net_ipvs(net)->tot_stats);
seq_printf(seq, "%8LX %8LX %8LX %16LX %16LX\n\n",
@ -2178,8 +2178,8 @@ static int ip_vs_stats_percpu_show(struct seq_file *seq, void *v)
/* 01234567 01234567 01234567 0123456701234567 0123456701234567 */
seq_puts(seq,
" Total Incoming Outgoing Incoming Outgoing\n");
seq_printf(seq,
"CPU Conns Packets Packets Bytes Bytes\n");
seq_puts(seq,
"CPU Conns Packets Packets Bytes Bytes\n");
for_each_possible_cpu(i) {
struct ip_vs_cpu_stats *u = per_cpu_ptr(cpustats, i);

View File

@ -260,7 +260,9 @@ static int ip_vs_ftp_out(struct ip_vs_app *app, struct ip_vs_conn *cp,
buf_len = strlen(buf);
ct = nf_ct_get(skb, &ctinfo);
if (ct && !nf_ct_is_untracked(ct) && nfct_nat(ct)) {
if (ct && (ct->status & IPS_NAT_MASK)) {
bool mangled;
/* If mangling fails this function will return 0
* which will cause the packet to be dropped.
* Mangling can only fail under memory pressure,
@ -268,12 +270,13 @@ static int ip_vs_ftp_out(struct ip_vs_app *app, struct ip_vs_conn *cp,
* packet.
*/
rcu_read_lock();
ret = nf_nat_mangle_tcp_packet(skb, ct, ctinfo,
iph->ihl * 4,
start-data, end-start,
buf, buf_len);
mangled = nf_nat_mangle_tcp_packet(skb, ct, ctinfo,
iph->ihl * 4,
start - data,
end - start,
buf, buf_len);
rcu_read_unlock();
if (ret) {
if (mangled) {
ip_vs_nfct_expect_related(skb, ct, n_cp,
IPPROTO_TCP, 0, 0);
if (skb->ip_summed == CHECKSUM_COMPLETE)
@ -482,11 +485,8 @@ static struct pernet_operations ip_vs_ftp_ops = {
static int __init ip_vs_ftp_init(void)
{
int rv;
rv = register_pernet_subsys(&ip_vs_ftp_ops);
/* rcu_barrier() is called by netns on error */
return rv;
return register_pernet_subsys(&ip_vs_ftp_ops);
}
/*

View File

@ -85,7 +85,7 @@ ip_vs_update_conntrack(struct sk_buff *skb, struct ip_vs_conn *cp, int outin)
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
struct nf_conntrack_tuple new_tuple;
if (ct == NULL || nf_ct_is_confirmed(ct) || nf_ct_is_untracked(ct) ||
if (ct == NULL || nf_ct_is_confirmed(ct) ||
nf_ct_is_dying(ct))
return;
@ -232,7 +232,7 @@ void ip_vs_nfct_expect_related(struct sk_buff *skb, struct nf_conn *ct,
{
struct nf_conntrack_expect *exp;
if (ct == NULL || nf_ct_is_untracked(ct))
if (ct == NULL)
return;
exp = nf_ct_expect_alloc(ct);

View File

@ -193,28 +193,6 @@ ip_vs_create_timeout_table(int *table, int size)
}
/*
* Set timeout value for state specified by name
*/
int
ip_vs_set_state_timeout(int *table, int num, const char *const *names,
const char *name, int to)
{
int i;
if (!table || !name || !to)
return -EINVAL;
for (i = 0; i < num; i++) {
if (strcmp(names[i], name))
continue;
table[i] = to * HZ;
return 0;
}
return -ENOENT;
}
const char * ip_vs_state_name(__u16 proto, int state)
{
struct ip_vs_protocol *pp = ip_vs_proto_get(proto);

View File

@ -520,7 +520,7 @@ static int ip_vs_sync_conn_needed(struct netns_ipvs *ipvs,
if (!(cp->flags & IP_VS_CONN_F_TEMPLATE) &&
pkts % sync_period != sysctl_sync_threshold(ipvs))
return 0;
} else if (sync_refresh_period <= 0 &&
} else if (!sync_refresh_period &&
pkts != sysctl_sync_threshold(ipvs))
return 0;
@ -1849,7 +1849,7 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
if (state == IP_VS_STATE_MASTER) {
struct ipvs_master_sync_state *ms;
ipvs->ms = kzalloc(count * sizeof(ipvs->ms[0]), GFP_KERNEL);
ipvs->ms = kcalloc(count, sizeof(ipvs->ms[0]), GFP_KERNEL);
if (!ipvs->ms)
goto out;
ms = ipvs->ms;
@ -1862,7 +1862,7 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
ms->ipvs = ipvs;
}
} else {
array = kzalloc(count * sizeof(struct task_struct *),
array = kcalloc(count, sizeof(struct task_struct *),
GFP_KERNEL);
if (!array)
goto out;
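The kzalloc-to-kcalloc conversions above are not purely cosmetic: kcalloc() checks the count * size multiplication for overflow and returns NULL rather than allocating a short buffer. A minimal userspace model of that guard:

#include <stdint.h>
#include <stdlib.h>

/* userspace model of the overflow check kcalloc() performs before
 * falling through to a zeroing allocation (kzalloc semantics) */
static void *checked_calloc(size_t n, size_t size)
{
        if (size != 0 && n > SIZE_MAX / size)
                return NULL;            /* n * size would wrap */
        return calloc(n, size);
}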

View File

@ -775,7 +775,7 @@ ip_vs_nat_xmit(struct sk_buff *skb, struct ip_vs_conn *cp,
enum ip_conntrack_info ctinfo;
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
if (ct && !nf_ct_is_untracked(ct)) {
if (ct) {
IP_VS_DBG_RL_PKT(10, AF_INET, pp, skb, ipvsh->off,
"ip_vs_nat_xmit(): "
"stopping DNAT to local address");
@ -866,7 +866,7 @@ ip_vs_nat_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp,
enum ip_conntrack_info ctinfo;
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
if (ct && !nf_ct_is_untracked(ct)) {
if (ct) {
IP_VS_DBG_RL_PKT(10, AF_INET6, pp, skb, ipvsh->off,
"ip_vs_nat_xmit_v6(): "
"stopping DNAT to local address");
@ -1338,7 +1338,7 @@ ip_vs_icmp_xmit(struct sk_buff *skb, struct ip_vs_conn *cp,
enum ip_conntrack_info ctinfo;
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
if (ct && !nf_ct_is_untracked(ct)) {
if (ct) {
IP_VS_DBG(10, "%s(): "
"stopping DNAT to local address %pI4\n",
__func__, &cp->daddr.ip);
@ -1429,7 +1429,7 @@ ip_vs_icmp_xmit_v6(struct sk_buff *skb, struct ip_vs_conn *cp,
enum ip_conntrack_info ctinfo;
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
if (ct && !nf_ct_is_untracked(ct)) {
if (ct) {
IP_VS_DBG(10, "%s(): "
"stopping DNAT to local address %pI6\n",
__func__, &cp->daddr.in6);

View File

@ -55,7 +55,7 @@ seq_print_acct(struct seq_file *s, const struct nf_conn *ct, int dir)
};
EXPORT_SYMBOL_GPL(seq_print_acct);
static struct nf_ct_ext_type acct_extend __read_mostly = {
static const struct nf_ct_ext_type acct_extend = {
.len = sizeof(struct nf_conn_acct),
.align = __alignof__(struct nf_conn_acct),
.id = NF_CT_EXT_ACCT,

View File

@ -207,6 +207,8 @@ static int __init nf_conntrack_amanda_init(void)
{
int ret, i;
NF_CT_HELPER_BUILD_BUG_ON(0);
for (i = 0; i < ARRAY_SIZE(search); i++) {
search[i].ts = textsearch_prepare(ts_algo, search[i].string,
search[i].len,

View File

@ -76,6 +76,7 @@ struct conntrack_gc_work {
struct delayed_work dwork;
u32 last_bucket;
bool exiting;
bool early_drop;
long next_gc_run;
};
@ -180,14 +181,6 @@ EXPORT_SYMBOL_GPL(nf_conntrack_htable_size);
unsigned int nf_conntrack_max __read_mostly;
seqcount_t nf_conntrack_generation __read_mostly;
/* nf_conn must be 8 bytes aligned, as the 3 LSB bits are used
* for the nfctinfo. We cheat by (ab)using the PER CPU cache line
* alignment to enforce this.
*/
DEFINE_PER_CPU_ALIGNED(struct nf_conn, nf_conntrack_untracked);
EXPORT_PER_CPU_SYMBOL(nf_conntrack_untracked);
static unsigned int nf_conntrack_hash_rnd __read_mostly;
static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple,
@ -706,7 +699,7 @@ static int nf_ct_resolve_clash(struct net *net, struct sk_buff *skb,
l4proto = __nf_ct_l4proto_find(nf_ct_l3num(ct), nf_ct_protonum(ct));
if (l4proto->allow_clash &&
!nfct_nat(ct) &&
((ct->status & IPS_NAT_DONE_MASK) == 0) &&
!nf_ct_is_dying(ct) &&
atomic_inc_not_zero(&ct->ct_general.use)) {
enum ip_conntrack_info oldinfo;
@ -959,10 +952,30 @@ static noinline int early_drop(struct net *net, unsigned int _hash)
return false;
}
static bool gc_worker_skip_ct(const struct nf_conn *ct)
{
return !nf_ct_is_confirmed(ct) || nf_ct_is_dying(ct);
}
static bool gc_worker_can_early_drop(const struct nf_conn *ct)
{
const struct nf_conntrack_l4proto *l4proto;
if (!test_bit(IPS_ASSURED_BIT, &ct->status))
return true;
l4proto = __nf_ct_l4proto_find(nf_ct_l3num(ct), nf_ct_protonum(ct));
if (l4proto->can_early_drop && l4proto->can_early_drop(ct))
return true;
return false;
}
static void gc_worker(struct work_struct *work)
{
unsigned int min_interval = max(HZ / GC_MAX_BUCKETS_DIV, 1u);
unsigned int i, goal, buckets = 0, expired_count = 0;
unsigned int nf_conntrack_max95 = 0;
struct conntrack_gc_work *gc_work;
unsigned int ratio, scanned = 0;
unsigned long next_run;
@ -971,6 +984,8 @@ static void gc_worker(struct work_struct *work)
goal = nf_conntrack_htable_size / GC_MAX_BUCKETS_DIV;
i = gc_work->last_bucket;
if (gc_work->early_drop)
nf_conntrack_max95 = nf_conntrack_max / 100u * 95u;
do {
struct nf_conntrack_tuple_hash *h;
@ -987,6 +1002,8 @@ static void gc_worker(struct work_struct *work)
i = 0;
hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[i], hnnode) {
struct net *net;
tmp = nf_ct_tuplehash_to_ctrack(h);
scanned++;
@ -995,6 +1012,27 @@ static void gc_worker(struct work_struct *work)
expired_count++;
continue;
}
if (nf_conntrack_max95 == 0 || gc_worker_skip_ct(tmp))
continue;
net = nf_ct_net(tmp);
if (atomic_read(&net->ct.count) < nf_conntrack_max95)
continue;
/* need to take reference to avoid possible races */
if (!atomic_inc_not_zero(&tmp->ct_general.use))
continue;
if (gc_worker_skip_ct(tmp)) {
nf_ct_put(tmp);
continue;
}
if (gc_worker_can_early_drop(tmp))
nf_ct_kill(tmp);
nf_ct_put(tmp);
}
/* could check get_nulls_value() here and restart if ct
@ -1040,6 +1078,7 @@ static void gc_worker(struct work_struct *work)
next_run = gc_work->next_gc_run;
gc_work->last_bucket = i;
gc_work->early_drop = false;
queue_delayed_work(system_long_wq, &gc_work->dwork, next_run);
}
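The early_drop flag threaded through gc_worker() above is armed by the allocation slow path (next hunk) once the table fills up; the worker then reclaims assured-but-closing flows only while the table sits above 95% of nf_conntrack_max. A standalone sketch of the threshold arithmetic; dividing before multiplying also keeps the intermediate value from overflowing for very large table sizes:

#include <stdio.h>

int main(void)
{
        unsigned int nf_conntrack_max = 262144;         /* example size */
        unsigned int max95 = nf_conntrack_max / 100u * 95u;

        printf("early-drop kicks in above %u entries\n", max95);
        return 0;                                       /* prints 248995 */
}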
@ -1065,6 +1104,8 @@ __nf_conntrack_alloc(struct net *net,
if (nf_conntrack_max &&
unlikely(atomic_read(&net->ct.count) > nf_conntrack_max)) {
if (!early_drop(net, hash)) {
if (!conntrack_gc_work.early_drop)
conntrack_gc_work.early_drop = true;
atomic_dec(&net->ct.count);
net_warn_ratelimited("nf_conntrack: table full, dropping packet\n");
return ERR_PTR(-ENOMEM);
@ -1314,9 +1355,10 @@ nf_conntrack_in(struct net *net, u_int8_t pf, unsigned int hooknum,
int ret;
tmpl = nf_ct_get(skb, &ctinfo);
if (tmpl) {
if (tmpl || ctinfo == IP_CT_UNTRACKED) {
/* Previously seen (loopback or untracked)? Ignore. */
if (!nf_ct_is_template(tmpl)) {
if ((tmpl && !nf_ct_is_template(tmpl)) ||
ctinfo == IP_CT_UNTRACKED) {
NF_CT_STAT_INC_ATOMIC(net, ignore);
return NF_ACCEPT;
}
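With the per-cpu fake conntrack gone, "untracked" is carried purely in the ctinfo bits: skb->_nfct packs the conntrack pointer together with the 3-bit ip_conntrack_info value, relying on the 8-byte alignment of struct nf_conn that the comment removed earlier in this file described. A userspace model of that encoding (IP_CT_UNTRACKED is 7 in the uapi header after this series):

#include <stdint.h>
#include <stdio.h>

#define NFCT_INFOMASK   7UL     /* low bits of skb->_nfct hold ctinfo */
#define IP_CT_UNTRACKED 7

static unsigned long nfct_pack(const void *ct, unsigned long ctinfo)
{
        /* struct nf_conn is 8-byte aligned, so the 3 low bits are free */
        return (unsigned long)ct | ctinfo;
}

int main(void)
{
        /* untracked skbs now carry no conntrack object at all */
        unsigned long nfct = nfct_pack(NULL, IP_CT_UNTRACKED);

        printf("ct=%p ctinfo=%lu\n",
               (void *)(nfct & ~NFCT_INFOMASK), nfct & NFCT_INFOMASK);
        return 0;
}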
@ -1629,18 +1671,6 @@ void nf_ct_free_hashtable(void *hash, unsigned int size)
}
EXPORT_SYMBOL_GPL(nf_ct_free_hashtable);
static int untrack_refs(void)
{
int cnt = 0, cpu;
for_each_possible_cpu(cpu) {
struct nf_conn *ct = &per_cpu(nf_conntrack_untracked, cpu);
cnt += atomic_read(&ct->ct_general.use) - 1;
}
return cnt;
}
void nf_conntrack_cleanup_start(void)
{
conntrack_gc_work.exiting = true;
@ -1650,8 +1680,6 @@ void nf_conntrack_cleanup_start(void)
void nf_conntrack_cleanup_end(void)
{
RCU_INIT_POINTER(nf_ct_destroy, NULL);
while (untrack_refs() > 0)
schedule();
cancel_delayed_work_sync(&conntrack_gc_work.dwork);
nf_ct_free_hashtable(nf_conntrack_hash, nf_conntrack_htable_size);
@ -1825,20 +1853,44 @@ EXPORT_SYMBOL_GPL(nf_conntrack_set_hashsize);
module_param_call(hashsize, nf_conntrack_set_hashsize, param_get_uint,
&nf_conntrack_htable_size, 0600);
void nf_ct_untracked_status_or(unsigned long bits)
static unsigned int total_extension_size(void)
{
int cpu;
/* remember to add new extensions below */
BUILD_BUG_ON(NF_CT_EXT_NUM > 9);
for_each_possible_cpu(cpu)
per_cpu(nf_conntrack_untracked, cpu).status |= bits;
}
EXPORT_SYMBOL_GPL(nf_ct_untracked_status_or);
return sizeof(struct nf_ct_ext) +
sizeof(struct nf_conn_help)
#if IS_ENABLED(CONFIG_NF_NAT)
+ sizeof(struct nf_conn_nat)
#endif
+ sizeof(struct nf_conn_seqadj)
+ sizeof(struct nf_conn_acct)
#ifdef CONFIG_NF_CONNTRACK_EVENTS
+ sizeof(struct nf_conntrack_ecache)
#endif
#ifdef CONFIG_NF_CONNTRACK_TIMESTAMP
+ sizeof(struct nf_conn_tstamp)
#endif
#ifdef CONFIG_NF_CONNTRACK_TIMEOUT
+ sizeof(struct nf_conn_timeout)
#endif
#ifdef CONFIG_NF_CONNTRACK_LABELS
+ sizeof(struct nf_conn_labels)
#endif
#if IS_ENABLED(CONFIG_NETFILTER_SYNPROXY)
+ sizeof(struct nf_conn_synproxy)
#endif
;
};
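total_extension_size() exists because extensions are now fixed-size: the worst case is a plain sum, and keeping that sum at or below 255 is what lets struct nf_ct_ext return to u8 offsets, as the BUILD_BUG_ON() in nf_conntrack_init_start() just below enforces. A compile-time model of the same invariant (the member layouts here are stand-ins, not the kernel structs):

#include <stdint.h>

struct help_model { char scratch[32]; };        /* fixed helper pad */
struct acct_model { uint64_t pkts[2], bytes[2]; };

enum {
        TOTAL_EXT_SIZE = 10 /* header stand-in */
                         + sizeof(struct help_model)
                         + sizeof(struct acct_model) /* + ... */
};

_Static_assert(TOTAL_EXT_SIZE <= 255,
               "u8 offsets in the extension header would overflow");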
int nf_conntrack_init_start(void)
{
int max_factor = 8;
int ret = -ENOMEM;
int i, cpu;
int i;
/* struct nf_ct_ext uses u8 to store offsets/size */
BUILD_BUG_ON(total_extension_size() > 255u);
seqcount_init(&nf_conntrack_generation);
@ -1921,15 +1973,6 @@ int nf_conntrack_init_start(void)
if (ret < 0)
goto err_proto;
/* Set up fake conntrack: to never be deleted, not in any hashes */
for_each_possible_cpu(cpu) {
struct nf_conn *ct = &per_cpu(nf_conntrack_untracked, cpu);
write_pnet(&ct->ct_net, &init_net);
atomic_set(&ct->ct_general.use, 1);
}
/* - and look it like as a confirmed connection */
nf_ct_untracked_status_or(IPS_CONFIRMED | IPS_UNTRACKED);
conntrack_gc_work_init(&conntrack_gc_work);
queue_delayed_work(system_long_wq, &conntrack_gc_work.dwork, HZ);
@ -1977,6 +2020,7 @@ int nf_conntrack_init_net(struct net *net)
int ret = -ENOMEM;
int cpu;
BUILD_BUG_ON(IP_CT_UNTRACKED == IP_CT_NUMBER);
atomic_set(&net->ct.count, 0);
net->ct.pcpu_lists = alloc_percpu(struct ct_pcpu);

View File

@ -195,7 +195,7 @@ void nf_ct_deliver_cached_events(struct nf_conn *ct)
events = xchg(&e->cache, 0);
if (!nf_ct_is_confirmed(ct) || nf_ct_is_dying(ct) || !events)
if (!nf_ct_is_confirmed(ct) || nf_ct_is_dying(ct))
goto out_unlock;
/* We make a copy of the missed event cache without taking
@ -212,7 +212,7 @@ void nf_ct_deliver_cached_events(struct nf_conn *ct)
ret = notify->fcn(events | missed, &item);
if (likely(ret >= 0 && !missed))
if (likely(ret == 0 && !missed))
goto out_unlock;
spin_lock_bh(&ct->lock);
@ -347,7 +347,7 @@ static struct ctl_table event_sysctl_table[] = {
};
#endif /* CONFIG_SYSCTL */
static struct nf_ct_ext_type event_extend __read_mostly = {
static const struct nf_ct_ext_type event_extend = {
.len = sizeof(struct nf_conntrack_ecache),
.align = __alignof__(struct nf_conntrack_ecache),
.id = NF_CT_EXT_ECACHE,
@ -420,6 +420,9 @@ int nf_conntrack_ecache_init(void)
int ret = nf_ct_extend_register(&event_extend);
if (ret < 0)
pr_err("nf_ct_event: Unable to register event extension.\n");
BUILD_BUG_ON(__IPCT_MAX >= 16); /* ctmask, missed use u16 */
return ret;
}

View File

@ -103,6 +103,17 @@ nf_ct_exp_equal(const struct nf_conntrack_tuple *tuple,
nf_ct_zone_equal_any(i->master, zone);
}
bool nf_ct_remove_expect(struct nf_conntrack_expect *exp)
{
if (del_timer(&exp->timeout)) {
nf_ct_unlink_expect(exp);
nf_ct_expect_put(exp);
return true;
}
return false;
}
EXPORT_SYMBOL_GPL(nf_ct_remove_expect);
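nf_ct_remove_expect() folds a del_timer/unlink/put triple that was open-coded at half a dozen call sites, several of them converted in the hunks below. The del_timer() return value is what makes the helper safe: only the caller that actually deactivated a pending timer goes on to unlink and drop the reference, so a concurrent timer expiry cannot tear the expectation down twice. The consolidated idiom:

/* before, repeated at each call site:
 *
 *     if (del_timer(&exp->timeout)) {
 *             nf_ct_unlink_expect(exp);
 *             nf_ct_expect_put(exp);
 *     }
 *
 * after: one call, whose return value reports whether this path
 * won the race against the timer */
bool removed = nf_ct_remove_expect(exp);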
struct nf_conntrack_expect *
__nf_ct_expect_find(struct net *net,
const struct nf_conntrack_zone *zone,
@ -211,10 +222,7 @@ void nf_ct_remove_expectations(struct nf_conn *ct)
spin_lock_bh(&nf_conntrack_expect_lock);
hlist_for_each_entry_safe(exp, next, &help->expectations, lnode) {
if (del_timer(&exp->timeout)) {
nf_ct_unlink_expect(exp);
nf_ct_expect_put(exp);
}
nf_ct_remove_expect(exp);
}
spin_unlock_bh(&nf_conntrack_expect_lock);
}
@ -255,10 +263,7 @@ static inline int expect_matches(const struct nf_conntrack_expect *a,
void nf_ct_unexpect_related(struct nf_conntrack_expect *exp)
{
spin_lock_bh(&nf_conntrack_expect_lock);
if (del_timer(&exp->timeout)) {
nf_ct_unlink_expect(exp);
nf_ct_expect_put(exp);
}
nf_ct_remove_expect(exp);
spin_unlock_bh(&nf_conntrack_expect_lock);
}
EXPORT_SYMBOL_GPL(nf_ct_unexpect_related);
@ -394,10 +399,8 @@ static void evict_oldest_expect(struct nf_conn *master,
last = exp;
}
if (last && del_timer(&last->timeout)) {
nf_ct_unlink_expect(last);
nf_ct_expect_put(last);
}
if (last)
nf_ct_remove_expect(last);
}
static inline int __nf_ct_expect_check(struct nf_conntrack_expect *expect)
@ -419,11 +422,8 @@ static inline int __nf_ct_expect_check(struct nf_conntrack_expect *expect)
h = nf_ct_expect_dst_hash(net, &expect->tuple);
hlist_for_each_entry_safe(i, next, &nf_ct_expect_hash[h], hnode) {
if (expect_matches(i, expect)) {
if (del_timer(&i->timeout)) {
nf_ct_unlink_expect(i);
nf_ct_expect_put(i);
if (nf_ct_remove_expect(expect))
break;
}
} else if (expect_clash(i, expect)) {
ret = -EBUSY;
goto out;
@ -549,7 +549,7 @@ static int exp_seq_show(struct seq_file *s, void *v)
seq_printf(s, "%ld ", timer_pending(&expect->timeout)
? (long)(expect->timeout.expires - jiffies)/HZ : 0);
else
seq_printf(s, "- ");
seq_puts(s, "- ");
seq_printf(s, "l3proto = %u proto=%u ",
expect->tuple.src.l3num,
expect->tuple.dst.protonum);
@ -559,7 +559,7 @@ static int exp_seq_show(struct seq_file *s, void *v)
expect->tuple.dst.protonum));
if (expect->flags & NF_CT_EXPECT_PERMANENT) {
seq_printf(s, "PERMANENT");
seq_puts(s, "PERMANENT");
delim = ",";
}
if (expect->flags & NF_CT_EXPECT_INACTIVE) {

View File

@ -18,17 +18,14 @@
static struct nf_ct_ext_type __rcu *nf_ct_ext_types[NF_CT_EXT_NUM];
static DEFINE_MUTEX(nf_ct_ext_type_mutex);
#define NF_CT_EXT_PREALLOC 128u /* conntrack events are on by default */
void __nf_ct_ext_destroy(struct nf_conn *ct)
void nf_ct_ext_destroy(struct nf_conn *ct)
{
unsigned int i;
struct nf_ct_ext_type *t;
struct nf_ct_ext *ext = ct->ext;
for (i = 0; i < NF_CT_EXT_NUM; i++) {
if (!__nf_ct_ext_exist(ext, i))
continue;
rcu_read_lock();
t = rcu_dereference(nf_ct_ext_types[i]);
@ -41,54 +38,26 @@ void __nf_ct_ext_destroy(struct nf_conn *ct)
rcu_read_unlock();
}
}
EXPORT_SYMBOL(__nf_ct_ext_destroy);
EXPORT_SYMBOL(nf_ct_ext_destroy);
static void *
nf_ct_ext_create(struct nf_ct_ext **ext, enum nf_ct_ext_id id,
size_t var_alloc_len, gfp_t gfp)
{
unsigned int off, len;
struct nf_ct_ext_type *t;
size_t alloc_size;
rcu_read_lock();
t = rcu_dereference(nf_ct_ext_types[id]);
if (!t) {
rcu_read_unlock();
return NULL;
}
off = ALIGN(sizeof(struct nf_ct_ext), t->align);
len = off + t->len + var_alloc_len;
alloc_size = t->alloc_size + var_alloc_len;
rcu_read_unlock();
*ext = kzalloc(alloc_size, gfp);
if (!*ext)
return NULL;
(*ext)->offset[id] = off;
(*ext)->len = len;
return (void *)(*ext) + off;
}
void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
size_t var_alloc_len, gfp_t gfp)
void *nf_ct_ext_add(struct nf_conn *ct, enum nf_ct_ext_id id, gfp_t gfp)
{
unsigned int newlen, newoff, oldlen, alloc;
struct nf_ct_ext *old, *new;
int newlen, newoff;
struct nf_ct_ext_type *t;
/* Conntrack must not be confirmed to avoid races on reallocation. */
NF_CT_ASSERT(!nf_ct_is_confirmed(ct));
old = ct->ext;
if (!old)
return nf_ct_ext_create(&ct->ext, id, var_alloc_len, gfp);
if (__nf_ct_ext_exist(old, id))
return NULL;
if (old) {
if (__nf_ct_ext_exist(old, id))
return NULL;
oldlen = old->len;
} else {
oldlen = sizeof(*new);
}
rcu_read_lock();
t = rcu_dereference(nf_ct_ext_types[id]);
@ -97,15 +66,19 @@ void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
return NULL;
}
newoff = ALIGN(old->len, t->align);
newlen = newoff + t->len + var_alloc_len;
newoff = ALIGN(oldlen, t->align);
newlen = newoff + t->len;
rcu_read_unlock();
new = __krealloc(old, newlen, gfp);
alloc = max(newlen, NF_CT_EXT_PREALLOC);
new = __krealloc(old, alloc, gfp);
if (!new)
return NULL;
if (new != old) {
if (!old) {
memset(new->offset, 0, sizeof(new->offset));
ct->ext = new;
} else if (new != old) {
kfree_rcu(old, rcu);
rcu_assign_pointer(ct->ext, new);
}
@ -115,45 +88,10 @@ void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id,
memset((void *)new + newoff, 0, newlen - newoff);
return (void *)new + newoff;
}
EXPORT_SYMBOL(__nf_ct_ext_add_length);
static void update_alloc_size(struct nf_ct_ext_type *type)
{
int i, j;
struct nf_ct_ext_type *t1, *t2;
enum nf_ct_ext_id min = 0, max = NF_CT_EXT_NUM - 1;
/* unnecessary to update all types */
if ((type->flags & NF_CT_EXT_F_PREALLOC) == 0) {
min = type->id;
max = type->id;
}
/* This assumes that extended areas in conntrack for the types
whose NF_CT_EXT_F_PREALLOC bit set are allocated in order */
for (i = min; i <= max; i++) {
t1 = rcu_dereference_protected(nf_ct_ext_types[i],
lockdep_is_held(&nf_ct_ext_type_mutex));
if (!t1)
continue;
t1->alloc_size = ALIGN(sizeof(struct nf_ct_ext), t1->align) +
t1->len;
for (j = 0; j < NF_CT_EXT_NUM; j++) {
t2 = rcu_dereference_protected(nf_ct_ext_types[j],
lockdep_is_held(&nf_ct_ext_type_mutex));
if (t2 == NULL || t2 == t1 ||
(t2->flags & NF_CT_EXT_F_PREALLOC) == 0)
continue;
t1->alloc_size = ALIGN(t1->alloc_size, t2->align)
+ t2->len;
}
}
}
EXPORT_SYMBOL(nf_ct_ext_add);
/* This MUST be called in process context. */
int nf_ct_extend_register(struct nf_ct_ext_type *type)
int nf_ct_extend_register(const struct nf_ct_ext_type *type)
{
int ret = 0;
@ -163,12 +101,7 @@ int nf_ct_extend_register(struct nf_ct_ext_type *type)
goto out;
}
/* This ensures that nf_ct_ext_create() can allocate enough area
before updating alloc_size */
type->alloc_size = ALIGN(sizeof(struct nf_ct_ext), type->align)
+ type->len;
rcu_assign_pointer(nf_ct_ext_types[type->id], type);
update_alloc_size(type);
out:
mutex_unlock(&nf_ct_ext_type_mutex);
return ret;
@ -176,11 +109,10 @@ out:
EXPORT_SYMBOL_GPL(nf_ct_extend_register);
/* This MUST be called in process context. */
void nf_ct_extend_unregister(struct nf_ct_ext_type *type)
void nf_ct_extend_unregister(const struct nf_ct_ext_type *type)
{
mutex_lock(&nf_ct_ext_type_mutex);
RCU_INIT_POINTER(nf_ct_ext_types[type->id], NULL);
update_alloc_size(type);
mutex_unlock(&nf_ct_ext_type_mutex);
synchronize_rcu();
}
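The rewritten extend.c drops the per-type alloc_size bookkeeping entirely: an extension is appended to one buffer grown via __krealloc(), with the per-id offset recorded in the header, and NF_CT_EXT_PREALLOC rounds the first allocation up to 128 bytes since event caching is on by default. Below is a minimal single-threaded userspace model of the scheme; all names are illustrative, it assumes totals stay under 256 (as the BUILD_BUG_ON above guarantees for the kernel), and the kernel additionally defers freeing the old buffer through kfree_rcu() because readers may still hold it:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define EXT_NUM      9
#define EXT_PREALLOC 128u

struct ext {
        uint8_t offset[EXT_NUM];        /* 0 means "not present" */
        uint8_t len;
        char data[];
};

/* append extension `id` of `size` bytes, aligned to `align`
 * (a power of two), growing the single backing buffer */
static void *ext_add(struct ext **pext, unsigned int id,
                     uint8_t size, uint8_t align)
{
        struct ext *old = *pext, *new;
        unsigned int oldlen = old ? old->len : sizeof(struct ext);
        unsigned int off = (oldlen + align - 1) & ~(unsigned int)(align - 1);
        unsigned int newlen = off + size;
        unsigned int alloc = newlen > EXT_PREALLOC ? newlen : EXT_PREALLOC;

        if (old && old->offset[id])
                return NULL;                    /* already present */

        new = realloc(old, alloc);
        if (!new)
                return NULL;
        if (!old)
                memset(new->offset, 0, sizeof(new->offset));

        new->offset[id] = (uint8_t)off;
        new->len = (uint8_t)newlen;
        memset((char *)new + off, 0, size);
        *pext = new;
        return (char *)new + off;
}

The real code also refuses to add an extension to a confirmed conntrack, since reallocating the buffer would race with concurrent readers.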

View File

@ -577,6 +577,8 @@ static int __init nf_conntrack_ftp_init(void)
{
int i, ret = 0;
NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_ftp_master));
ftp_buffer = kmalloc(65536, GFP_KERNEL);
if (!ftp_buffer)
return -ENOMEM;
@ -589,12 +591,10 @@ static int __init nf_conntrack_ftp_init(void)
for (i = 0; i < ports_c; i++) {
nf_ct_helper_init(&ftp[2 * i], AF_INET, IPPROTO_TCP, "ftp",
FTP_PORT, ports[i], ports[i], &ftp_exp_policy,
0, sizeof(struct nf_ct_ftp_master), help,
nf_ct_ftp_from_nlattr, THIS_MODULE);
0, help, nf_ct_ftp_from_nlattr, THIS_MODULE);
nf_ct_helper_init(&ftp[2 * i + 1], AF_INET6, IPPROTO_TCP, "ftp",
FTP_PORT, ports[i], ports[i], &ftp_exp_policy,
0, sizeof(struct nf_ct_ftp_master), help,
nf_ct_ftp_from_nlattr, THIS_MODULE);
0, help, nf_ct_ftp_from_nlattr, THIS_MODULE);
}
ret = nf_conntrack_helpers_register(ftp, ports_c * 2);
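NF_CT_HELPER_BUILD_BUG_ON(), added at each helper's module init above and below, is the flip side of dropping data_len: every helper now shares the fixed scratchpad in struct nf_conn_help, so each module asserts at build time that its private state still fits (helpers with no private state, such as amanda, netbios-ns and tftp, pass 0). A model of the check, assuming the 32-byte pad described in the changelog; the ftp struct layout here is a stand-in:

#include <stdint.h>

#define NF_CT_HELPER_DATA_LEN 32        /* per the changelog */

struct ftp_master_model {               /* stand-in for nf_ct_ftp_master */
        uint32_t seq_aft_nl[2][2];
        uint16_t seq_aft_nl_num[2];
        uint16_t flags[2];
};

_Static_assert(sizeof(struct ftp_master_model) <= NF_CT_HELPER_DATA_LEN,
               "helper private data no longer fits the fixed scratchpad");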

View File

@ -637,7 +637,6 @@ static const struct nf_conntrack_expect_policy h245_exp_policy = {
static struct nf_conntrack_helper nf_conntrack_helper_h245 __read_mostly = {
.name = "H.245",
.me = THIS_MODULE,
.data_len = sizeof(struct nf_ct_h323_master),
.tuple.src.l3num = AF_UNSPEC,
.tuple.dst.protonum = IPPROTO_UDP,
.help = h245_help,
@ -1215,7 +1214,6 @@ static struct nf_conntrack_helper nf_conntrack_helper_q931[] __read_mostly = {
{
.name = "Q.931",
.me = THIS_MODULE,
.data_len = sizeof(struct nf_ct_h323_master),
.tuple.src.l3num = AF_INET,
.tuple.src.u.tcp.port = cpu_to_be16(Q931_PORT),
.tuple.dst.protonum = IPPROTO_TCP,
@ -1800,7 +1798,6 @@ static struct nf_conntrack_helper nf_conntrack_helper_ras[] __read_mostly = {
{
.name = "RAS",
.me = THIS_MODULE,
.data_len = sizeof(struct nf_ct_h323_master),
.tuple.src.l3num = AF_INET,
.tuple.src.u.udp.port = cpu_to_be16(RAS_PORT),
.tuple.dst.protonum = IPPROTO_UDP,
@ -1810,7 +1807,6 @@ static struct nf_conntrack_helper nf_conntrack_helper_ras[] __read_mostly = {
{
.name = "RAS",
.me = THIS_MODULE,
.data_len = sizeof(struct nf_ct_h323_master),
.tuple.src.l3num = AF_INET6,
.tuple.src.u.udp.port = cpu_to_be16(RAS_PORT),
.tuple.dst.protonum = IPPROTO_UDP,
@ -1836,6 +1832,8 @@ static int __init nf_conntrack_h323_init(void)
{
int ret;
NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_h323_master));
h323_buffer = kmalloc(65536, GFP_KERNEL);
if (!h323_buffer)
return -ENOMEM;

View File

@ -187,8 +187,7 @@ nf_ct_helper_ext_add(struct nf_conn *ct,
{
struct nf_conn_help *help;
help = nf_ct_ext_add_length(ct, NF_CT_EXT_HELPER,
helper->data_len, gfp);
help = nf_ct_ext_add(ct, NF_CT_EXT_HELPER, gfp);
if (help)
INIT_HLIST_HEAD(&help->expectations);
else
@ -392,6 +391,9 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
BUG_ON(me->expect_class_max >= NF_CT_MAX_EXPECT_CLASSES);
BUG_ON(strlen(me->name) > NF_CT_HELPER_NAME_LEN - 1);
if (me->expect_policy->max_expected > NF_CT_EXPECT_MAX_CNT)
return -EINVAL;
mutex_lock(&nf_ct_helper_mutex);
hlist_for_each_entry(cur, &nf_ct_helper_hash[h], hnode) {
if (nf_ct_tuple_src_mask_cmp(&cur->tuple, &me->tuple, &mask)) {
@ -455,11 +457,8 @@ void nf_conntrack_helper_unregister(struct nf_conntrack_helper *me)
if ((rcu_dereference_protected(
help->helper,
lockdep_is_held(&nf_conntrack_expect_lock)
) == me || exp->helper == me) &&
del_timer(&exp->timeout)) {
nf_ct_unlink_expect(exp);
nf_ct_expect_put(exp);
}
) == me || exp->helper == me))
nf_ct_remove_expect(exp);
}
}
spin_unlock_bh(&nf_conntrack_expect_lock);
@ -491,7 +490,7 @@ void nf_ct_helper_init(struct nf_conntrack_helper *helper,
u16 l3num, u16 protonum, const char *name,
u16 default_port, u16 spec_port, u32 id,
const struct nf_conntrack_expect_policy *exp_pol,
u32 expect_class_max, u32 data_len,
u32 expect_class_max,
int (*help)(struct sk_buff *skb, unsigned int protoff,
struct nf_conn *ct,
enum ip_conntrack_info ctinfo),
@ -504,7 +503,6 @@ void nf_ct_helper_init(struct nf_conntrack_helper *helper,
helper->tuple.src.u.all = htons(spec_port);
helper->expect_policy = exp_pol;
helper->expect_class_max = expect_class_max;
helper->data_len = data_len;
helper->help = help;
helper->from_nlattr = from_nlattr;
helper->me = module;
@ -544,7 +542,7 @@ void nf_conntrack_helpers_unregister(struct nf_conntrack_helper *helper,
}
EXPORT_SYMBOL_GPL(nf_conntrack_helpers_unregister);
static struct nf_ct_ext_type helper_extend __read_mostly = {
static const struct nf_ct_ext_type helper_extend = {
.len = sizeof(struct nf_conn_help),
.align = __alignof__(struct nf_conn_help),
.id = NF_CT_EXT_HELPER,

View File

@ -243,6 +243,12 @@ static int __init nf_conntrack_irc_init(void)
return -EINVAL;
}
if (max_dcc_channels > NF_CT_EXPECT_MAX_CNT) {
pr_err("max_dcc_channels must not be more than %u\n",
NF_CT_EXPECT_MAX_CNT);
return -EINVAL;
}
irc_exp_policy.max_expected = max_dcc_channels;
irc_exp_policy.timeout = dcc_timeout;
@ -257,7 +263,7 @@ static int __init nf_conntrack_irc_init(void)
for (i = 0; i < ports_c; i++) {
nf_ct_helper_init(&irc[i], AF_INET, IPPROTO_TCP, "irc",
IRC_PORT, ports[i], i, &irc_exp_policy,
0, 0, help, NULL, THIS_MODULE);
0, help, NULL, THIS_MODULE);
}
ret = nf_conntrack_helpers_register(&irc[0], ports_c);

View File

@ -82,7 +82,7 @@ void nf_connlabels_put(struct net *net)
}
EXPORT_SYMBOL_GPL(nf_connlabels_put);
static struct nf_ct_ext_type labels_extend __read_mostly = {
static const struct nf_ct_ext_type labels_extend = {
.len = sizeof(struct nf_conn_labels),
.align = __alignof__(struct nf_conn_labels),
.id = NF_CT_EXT_LABELS,

View File

@ -58,6 +58,8 @@ static struct nf_conntrack_helper helper __read_mostly = {
static int __init nf_conntrack_netbios_ns_init(void)
{
NF_CT_HELPER_BUILD_BUG_ON(0);
exp_policy.timeout = timeout;
return nf_conntrack_helper_register(&helper);
}

View File

@ -467,7 +467,7 @@ ctnetlink_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
struct nlattr *nest_parms;
unsigned int flags = portid ? NLM_F_MULTI : 0, event;
event = (NFNL_SUBSYS_CTNETLINK << 8 | IPCTNL_MSG_CT_NEW);
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, IPCTNL_MSG_CT_NEW);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
if (nlh == NULL)
goto nlmsg_failure;
@ -627,10 +627,6 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
unsigned int flags = 0, group;
int err;
/* ignore our fake conntrack entry */
if (nf_ct_is_untracked(ct))
return 0;
if (events & (1 << IPCT_DESTROY)) {
type = IPCTNL_MSG_CT_DELETE;
group = NFNLGRP_CONNTRACK_DESTROY;
@ -652,7 +648,7 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
if (skb == NULL)
goto errout;
type |= NFNL_SUBSYS_CTNETLINK << 8;
type = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, type);
nlh = nlmsg_put(skb, item->portid, 0, type, sizeof(*nfmsg), flags);
if (nlh == NULL)
goto nlmsg_failure;
@ -1991,7 +1987,8 @@ ctnetlink_ct_stat_cpu_fill_info(struct sk_buff *skb, u32 portid, u32 seq,
struct nfgenmsg *nfmsg;
unsigned int flags = portid ? NLM_F_MULTI : 0, event;
event = (NFNL_SUBSYS_CTNETLINK << 8 | IPCTNL_MSG_CT_GET_STATS_CPU);
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK,
IPCTNL_MSG_CT_GET_STATS_CPU);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
if (nlh == NULL)
goto nlmsg_failure;
@ -2074,7 +2071,7 @@ ctnetlink_stat_ct_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
unsigned int flags = portid ? NLM_F_MULTI : 0, event;
unsigned int nr_conntracks = atomic_read(&net->ct.count);
event = (NFNL_SUBSYS_CTNETLINK << 8 | IPCTNL_MSG_CT_GET_STATS);
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK, IPCTNL_MSG_CT_GET_STATS);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
if (nlh == NULL)
goto nlmsg_failure;
@ -2180,13 +2177,7 @@ ctnetlink_glue_build_size(const struct nf_conn *ct)
static struct nf_conn *ctnetlink_glue_get_ct(const struct sk_buff *skb,
enum ip_conntrack_info *ctinfo)
{
struct nf_conn *ct;
ct = nf_ct_get(skb, ctinfo);
if (ct && nf_ct_is_untracked(ct))
ct = NULL;
return ct;
return nf_ct_get(skb, ctinfo);
}
static int __ctnetlink_glue_build(struct sk_buff *skb, struct nf_conn *ct)
@ -2585,7 +2576,7 @@ ctnetlink_exp_fill_info(struct sk_buff *skb, u32 portid, u32 seq,
struct nfgenmsg *nfmsg;
unsigned int flags = portid ? NLM_F_MULTI : 0;
event |= NFNL_SUBSYS_CTNETLINK_EXP << 8;
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_EXP, event);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
if (nlh == NULL)
goto nlmsg_failure;
@ -2636,7 +2627,7 @@ ctnetlink_expect_event(unsigned int events, struct nf_exp_event *item)
if (skb == NULL)
goto errout;
type |= NFNL_SUBSYS_CTNETLINK_EXP << 8;
type = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_EXP, type);
nlh = nlmsg_put(skb, item->portid, 0, type, sizeof(*nfmsg), flags);
if (nlh == NULL)
goto nlmsg_failure;
@ -3054,6 +3045,10 @@ ctnetlink_alloc_expect(const struct nlattr * const cda[], struct nf_conn *ct,
struct nf_conn_help *help;
int err;
help = nfct_help(ct);
if (!help)
return ERR_PTR(-EOPNOTSUPP);
if (cda[CTA_EXPECT_CLASS] && helper) {
class = ntohl(nla_get_be32(cda[CTA_EXPECT_CLASS]));
if (class > helper->expect_class_max)
@ -3063,26 +3058,11 @@ ctnetlink_alloc_expect(const struct nlattr * const cda[], struct nf_conn *ct,
if (!exp)
return ERR_PTR(-ENOMEM);
help = nfct_help(ct);
if (!help) {
if (!cda[CTA_EXPECT_TIMEOUT]) {
err = -EINVAL;
goto err_out;
}
exp->timeout.expires =
jiffies + ntohl(nla_get_be32(cda[CTA_EXPECT_TIMEOUT])) * HZ;
exp->flags = NF_CT_EXPECT_USERSPACE;
if (cda[CTA_EXPECT_FLAGS]) {
exp->flags |=
ntohl(nla_get_be32(cda[CTA_EXPECT_FLAGS]));
}
if (cda[CTA_EXPECT_FLAGS]) {
exp->flags = ntohl(nla_get_be32(cda[CTA_EXPECT_FLAGS]));
exp->flags &= ~NF_CT_EXPECT_USERSPACE;
} else {
if (cda[CTA_EXPECT_FLAGS]) {
exp->flags = ntohl(nla_get_be32(cda[CTA_EXPECT_FLAGS]));
exp->flags &= ~NF_CT_EXPECT_USERSPACE;
} else
exp->flags = 0;
exp->flags = 0;
}
if (cda[CTA_EXPECT_FN]) {
const char *name = nla_data(cda[CTA_EXPECT_FN]);
@ -3245,7 +3225,8 @@ ctnetlink_exp_stat_fill_info(struct sk_buff *skb, u32 portid, u32 seq, int cpu,
struct nfgenmsg *nfmsg;
unsigned int flags = portid ? NLM_F_MULTI : 0, event;
event = (NFNL_SUBSYS_CTNETLINK << 8 | IPCTNL_MSG_EXP_GET_STATS_CPU);
event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK,
IPCTNL_MSG_EXP_GET_STATS_CPU);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
if (nlh == NULL)
goto nlmsg_failure;
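All of the open-coded `subsys << 8 | msg` constructions in this file are funneled through the new nfnl_msg_type() helper: nfnetlink message types carry the subsystem identifier in the high byte and the per-subsystem message in the low byte. A runnable userspace model:

#include <stdint.h>
#include <stdio.h>

static inline uint16_t nfnl_msg_type(uint8_t subsys, uint8_t msg_type)
{
        return subsys << 8 | msg_type;
}

int main(void)
{
        /* NFNL_SUBSYS_CTNETLINK = 1, IPCTNL_MSG_CT_NEW = 0 in the
         * uapi headers */
        printf("0x%04x\n", nfnl_msg_type(1, 0));        /* 0x0100 */
        return 0;
}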

View File

@ -263,7 +263,7 @@ out_unexpect_orig:
goto out_put_both;
}
static inline int
static int
pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff,
struct PptpControlHeader *ctlh,
union pptp_ctrl_union *pptpReq,
@ -391,7 +391,7 @@ invalid:
return NF_ACCEPT;
}
static inline int
static int
pptp_outbound_pkt(struct sk_buff *skb, unsigned int protoff,
struct PptpControlHeader *ctlh,
union pptp_ctrl_union *pptpReq,
@ -523,6 +523,14 @@ conntrack_pptp_help(struct sk_buff *skb, unsigned int protoff,
int ret;
u_int16_t msg;
#if IS_ENABLED(CONFIG_NF_NAT)
if (!nf_ct_is_confirmed(ct) && (ct->status & IPS_NAT_MASK)) {
struct nf_conn_nat *nat = nf_ct_ext_find(ct, NF_CT_EXT_NAT);
if (!nat && !nf_ct_ext_add(ct, NF_CT_EXT_NAT, GFP_ATOMIC))
return NF_DROP;
}
#endif
/* don't do any tracking before tcp handshake complete */
if (ctinfo != IP_CT_ESTABLISHED && ctinfo != IP_CT_ESTABLISHED_REPLY)
return NF_ACCEPT;
@ -596,7 +604,6 @@ static const struct nf_conntrack_expect_policy pptp_exp_policy = {
static struct nf_conntrack_helper pptp __read_mostly = {
.name = "pptp",
.me = THIS_MODULE,
.data_len = sizeof(struct nf_ct_pptp_master),
.tuple.src.l3num = AF_INET,
.tuple.src.u.tcp.port = cpu_to_be16(PPTP_CONTROL_PORT),
.tuple.dst.protonum = IPPROTO_TCP,
@ -607,6 +614,8 @@ static struct nf_conntrack_helper pptp __read_mostly = {
static int __init nf_conntrack_pptp_init(void)
{
NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_pptp_master));
return nf_conntrack_helper_register(&pptp);
}

View File

@ -202,7 +202,7 @@ static int kill_l3proto(struct nf_conn *i, void *data)
static int kill_l4proto(struct nf_conn *i, void *data)
{
struct nf_conntrack_l4proto *l4proto;
l4proto = (struct nf_conntrack_l4proto *)data;
l4proto = data;
return nf_ct_protonum(i) == l4proto->l4proto &&
nf_ct_l3num(i) == l4proto->l3proto;
}
@ -441,9 +441,8 @@ EXPORT_SYMBOL_GPL(nf_ct_l4proto_unregister_one);
void nf_ct_l4proto_pernet_unregister_one(struct net *net,
struct nf_conntrack_l4proto *l4proto)
{
struct nf_proto_net *pn = NULL;
struct nf_proto_net *pn = nf_ct_l4proto_net(net, l4proto);
pn = nf_ct_l4proto_net(net, l4proto);
if (pn == NULL)
return;

View File

@ -609,6 +609,20 @@ out_invalid:
return -NF_ACCEPT;
}
static bool dccp_can_early_drop(const struct nf_conn *ct)
{
switch (ct->proto.dccp.state) {
case CT_DCCP_CLOSEREQ:
case CT_DCCP_CLOSING:
case CT_DCCP_TIMEWAIT:
return true;
default:
break;
}
return false;
}
static void dccp_print_tuple(struct seq_file *s,
const struct nf_conntrack_tuple *tuple)
{
@ -868,6 +882,7 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_dccp4 __read_mostly = {
.packet = dccp_packet,
.get_timeouts = dccp_get_timeouts,
.error = dccp_error,
.can_early_drop = dccp_can_early_drop,
.print_tuple = dccp_print_tuple,
.print_conntrack = dccp_print_conntrack,
#if IS_ENABLED(CONFIG_NF_CT_NETLINK)
@ -902,6 +917,7 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_dccp6 __read_mostly = {
.packet = dccp_packet,
.get_timeouts = dccp_get_timeouts,
.error = dccp_error,
.can_early_drop = dccp_can_early_drop,
.print_tuple = dccp_print_tuple,
.print_conntrack = dccp_print_conntrack,
#if IS_ENABLED(CONFIG_NF_CT_NETLINK)

View File

@ -535,6 +535,20 @@ out_invalid:
return -NF_ACCEPT;
}
static bool sctp_can_early_drop(const struct nf_conn *ct)
{
switch (ct->proto.sctp.state) {
case SCTP_CONNTRACK_SHUTDOWN_SENT:
case SCTP_CONNTRACK_SHUTDOWN_RECD:
case SCTP_CONNTRACK_SHUTDOWN_ACK_SENT:
return true;
default:
break;
}
return false;
}
#if IS_ENABLED(CONFIG_NF_CT_NETLINK)
#include <linux/netfilter/nfnetlink.h>
@ -781,6 +795,7 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4 __read_mostly = {
.get_timeouts = sctp_get_timeouts,
.new = sctp_new,
.error = sctp_error,
.can_early_drop = sctp_can_early_drop,
.me = THIS_MODULE,
#if IS_ENABLED(CONFIG_NF_CT_NETLINK)
.to_nlattr = sctp_to_nlattr,
@ -816,6 +831,7 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 __read_mostly = {
.get_timeouts = sctp_get_timeouts,
.new = sctp_new,
.error = sctp_error,
.can_early_drop = sctp_can_early_drop,
.me = THIS_MODULE,
#if IS_ENABLED(CONFIG_NF_CT_NETLINK)
.to_nlattr = sctp_to_nlattr,

View File

@ -419,10 +419,9 @@ static void tcp_options(const struct sk_buff *skb,
&& opsize == TCPOLEN_WINDOW) {
state->td_scale = *(u_int8_t *)ptr;
if (state->td_scale > 14) {
/* See RFC1323 */
state->td_scale = 14;
}
if (state->td_scale > TCP_MAX_WSCALE)
state->td_scale = TCP_MAX_WSCALE;
state->flags |=
IP_CT_TCP_FLAG_WINDOW_SCALE;
}
@ -1172,6 +1171,22 @@ static bool tcp_new(struct nf_conn *ct, const struct sk_buff *skb,
return true;
}
static bool tcp_can_early_drop(const struct nf_conn *ct)
{
switch (ct->proto.tcp.state) {
case TCP_CONNTRACK_FIN_WAIT:
case TCP_CONNTRACK_LAST_ACK:
case TCP_CONNTRACK_TIME_WAIT:
case TCP_CONNTRACK_CLOSE:
case TCP_CONNTRACK_CLOSE_WAIT:
return true;
default:
break;
}
return false;
}
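tcp_can_early_drop() follows the same rule as the DCCP and SCTP variants earlier: an assured flow becomes a legitimate eviction victim only once its state machine shows that at least one side has sent a FIN, so losing the entry can no longer break an active transfer. A standalone restatement of the eligibility rule:

#include <stdbool.h>
#include <stdio.h>

enum tcp_ct_state { SYN_SENT, SYN_RECV, ESTABLISHED, FIN_WAIT,
                    CLOSE_WAIT, LAST_ACK, TIME_WAIT, CLOSE };

static bool tcp_state_droppable(enum tcp_ct_state s)
{
        switch (s) {
        case FIN_WAIT: case CLOSE_WAIT: case LAST_ACK:
        case TIME_WAIT: case CLOSE:
                return true;            /* a FIN has been seen */
        default:
                return false;           /* still carrying traffic */
        }
}

int main(void)
{
        printf("ESTABLISHED: %d\n", tcp_state_droppable(ESTABLISHED));
        printf("TIME_WAIT:   %d\n", tcp_state_droppable(TIME_WAIT));
        return 0;
}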
#if IS_ENABLED(CONFIG_NF_CT_NETLINK)
#include <linux/netfilter/nfnetlink.h>
@ -1550,6 +1565,7 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp4 __read_mostly =
.get_timeouts = tcp_get_timeouts,
.new = tcp_new,
.error = tcp_error,
.can_early_drop = tcp_can_early_drop,
#if IS_ENABLED(CONFIG_NF_CT_NETLINK)
.to_nlattr = tcp_to_nlattr,
.nlattr_size = tcp_nlattr_size,
@ -1587,6 +1603,7 @@ struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp6 __read_mostly =
.get_timeouts = tcp_get_timeouts,
.new = tcp_new,
.error = tcp_error,
.can_early_drop = tcp_can_early_drop,
#if IS_ENABLED(CONFIG_NF_CT_NETLINK)
.to_nlattr = tcp_to_nlattr,
.nlattr_size = tcp_nlattr_size,

View File

@ -184,6 +184,8 @@ static int __init nf_conntrack_sane_init(void)
{
int i, ret = 0;
NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_sane_master));
sane_buffer = kmalloc(65536, GFP_KERNEL);
if (!sane_buffer)
return -ENOMEM;
@ -196,13 +198,11 @@ static int __init nf_conntrack_sane_init(void)
for (i = 0; i < ports_c; i++) {
nf_ct_helper_init(&sane[2 * i], AF_INET, IPPROTO_TCP, "sane",
SANE_PORT, ports[i], ports[i],
&sane_exp_policy, 0,
sizeof(struct nf_ct_sane_master), help, NULL,
&sane_exp_policy, 0, help, NULL,
THIS_MODULE);
nf_ct_helper_init(&sane[2 * i + 1], AF_INET6, IPPROTO_TCP, "sane",
SANE_PORT, ports[i], ports[i],
&sane_exp_policy, 0,
sizeof(struct nf_ct_sane_master), help, NULL,
&sane_exp_policy, 0, help, NULL,
THIS_MODULE);
}

View File

@ -231,7 +231,7 @@ s32 nf_ct_seq_offset(const struct nf_conn *ct,
}
EXPORT_SYMBOL_GPL(nf_ct_seq_offset);
static struct nf_ct_ext_type nf_ct_seqadj_extend __read_mostly = {
static const struct nf_ct_ext_type nf_ct_seqadj_extend = {
.len = sizeof(struct nf_conn_seqadj),
.align = __alignof__(struct nf_conn_seqadj),
.id = NF_CT_EXT_SEQADJ,

View File

@ -829,10 +829,8 @@ static void flush_expectations(struct nf_conn *ct, bool media)
hlist_for_each_entry_safe(exp, next, &help->expectations, lnode) {
if ((exp->class != SIP_EXPECT_SIGNALLING) ^ media)
continue;
if (!del_timer(&exp->timeout))
if (!nf_ct_remove_expect(exp))
continue;
nf_ct_unlink_expect(exp);
nf_ct_expect_put(exp);
if (!media)
break;
}
@ -1624,29 +1622,27 @@ static int __init nf_conntrack_sip_init(void)
{
int i, ret;
NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_sip_master));
if (ports_c == 0)
ports[ports_c++] = SIP_PORT;
for (i = 0; i < ports_c; i++) {
nf_ct_helper_init(&sip[4 * i], AF_INET, IPPROTO_UDP, "sip",
SIP_PORT, ports[i], i, sip_exp_policy,
SIP_EXPECT_MAX,
sizeof(struct nf_ct_sip_master), sip_help_udp,
SIP_EXPECT_MAX, sip_help_udp,
NULL, THIS_MODULE);
nf_ct_helper_init(&sip[4 * i + 1], AF_INET, IPPROTO_TCP, "sip",
SIP_PORT, ports[i], i, sip_exp_policy,
SIP_EXPECT_MAX,
sizeof(struct nf_ct_sip_master), sip_help_tcp,
SIP_EXPECT_MAX, sip_help_tcp,
NULL, THIS_MODULE);
nf_ct_helper_init(&sip[4 * i + 2], AF_INET6, IPPROTO_UDP, "sip",
SIP_PORT, ports[i], i, sip_exp_policy,
SIP_EXPECT_MAX,
sizeof(struct nf_ct_sip_master), sip_help_udp,
SIP_EXPECT_MAX, sip_help_udp,
NULL, THIS_MODULE);
nf_ct_helper_init(&sip[4 * i + 3], AF_INET6, IPPROTO_TCP, "sip",
SIP_PORT, ports[i], i, sip_exp_policy,
SIP_EXPECT_MAX,
sizeof(struct nf_ct_sip_master), sip_help_tcp,
SIP_EXPECT_MAX, sip_help_tcp,
NULL, THIS_MODULE);
}

View File

@ -250,7 +250,7 @@ static int ct_seq_show(struct seq_file *s, void *v)
goto release;
if (!(test_bit(IPS_SEEN_REPLY_BIT, &ct->status)))
seq_printf(s, "[UNREPLIED] ");
seq_puts(s, "[UNREPLIED] ");
print_tuple(s, &ct->tuplehash[IP_CT_DIR_REPLY].tuple,
l3proto, l4proto);
@ -261,7 +261,7 @@ static int ct_seq_show(struct seq_file *s, void *v)
goto release;
if (test_bit(IPS_ASSURED_BIT, &ct->status))
seq_printf(s, "[ASSURED] ");
seq_puts(s, "[ASSURED] ");
if (seq_has_overflowed(s))
goto release;
@ -350,7 +350,7 @@ static int ct_cpu_seq_show(struct seq_file *seq, void *v)
const struct ip_conntrack_stat *st = v;
if (v == SEQ_START_TOKEN) {
seq_printf(seq, "entries searched found new invalid ignore delete delete_list insert insert_failed drop early_drop icmp_error expect_new expect_create expect_delete search_restart\n");
seq_puts(seq, "entries searched found new invalid ignore delete delete_list insert insert_failed drop early_drop icmp_error expect_new expect_create expect_delete search_restart\n");
return 0;
}
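The seq_printf-to-seq_puts conversions scattered through this series all follow one rule: when the format string contains no conversion specifiers, seq_puts() (or seq_putc() for a single character) copies the string straight into the seq_file buffer and skips the vsnprintf pass entirely. The pattern:

seq_printf(s, "[ASSURED] ");    /* before: pays for format parsing */
seq_puts(s, "[ASSURED] ");      /* after: plain copy, same output  */
seq_putc(s, '\n');              /* single characters use seq_putc  */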

View File

@ -113,16 +113,18 @@ static int __init nf_conntrack_tftp_init(void)
{
int i, ret;
NF_CT_HELPER_BUILD_BUG_ON(0);
if (ports_c == 0)
ports[ports_c++] = TFTP_PORT;
for (i = 0; i < ports_c; i++) {
nf_ct_helper_init(&tftp[2 * i], AF_INET, IPPROTO_UDP, "tftp",
TFTP_PORT, ports[i], i, &tftp_exp_policy,
0, 0, tftp_help, NULL, THIS_MODULE);
0, tftp_help, NULL, THIS_MODULE);
nf_ct_helper_init(&tftp[2 * i + 1], AF_INET6, IPPROTO_UDP, "tftp",
TFTP_PORT, ports[i], i, &tftp_exp_policy,
0, 0, tftp_help, NULL, THIS_MODULE);
0, tftp_help, NULL, THIS_MODULE);
}
ret = nf_conntrack_helpers_register(tftp, ports_c * 2);

View File

@ -31,7 +31,7 @@ EXPORT_SYMBOL_GPL(nf_ct_timeout_find_get_hook);
void (*nf_ct_timeout_put_hook)(struct ctnl_timeout *timeout) __read_mostly;
EXPORT_SYMBOL_GPL(nf_ct_timeout_put_hook);
static struct nf_ct_ext_type timeout_extend __read_mostly = {
static const struct nf_ct_ext_type timeout_extend = {
.len = sizeof(struct nf_conn_timeout),
.align = __alignof__(struct nf_conn_timeout),
.id = NF_CT_EXT_TIMEOUT,

View File

@ -33,7 +33,7 @@ static struct ctl_table tstamp_sysctl_table[] = {
};
#endif /* CONFIG_SYSCTL */
static struct nf_ct_ext_type tstamp_extend __read_mostly = {
static const struct nf_ct_ext_type tstamp_extend = {
.len = sizeof(struct nf_conn_tstamp),
.align = __alignof__(struct nf_conn_tstamp),
.id = NF_CT_EXT_TSTAMP,

View File

@ -14,7 +14,7 @@
/* nf_queue.c */
int nf_queue(struct sk_buff *skb, struct nf_hook_state *state,
struct nf_hook_entry **entryp, unsigned int verdict);
void nf_queue_nf_hook_drop(struct net *net, const struct nf_hook_entry *entry);
unsigned int nf_queue_nf_hook_drop(struct net *net);
int __init netfilter_queue_init(void);
/* nf_log.c */

View File

@ -71,7 +71,6 @@ void nf_log_unset(struct net *net, const struct nf_logger *logger)
RCU_INIT_POINTER(net->nf.nf_loggers[i], NULL);
}
mutex_unlock(&nf_log_mutex);
synchronize_rcu();
}
EXPORT_SYMBOL(nf_log_unset);
@ -376,13 +375,13 @@ static int seq_show(struct seq_file *s, void *v)
logger = nft_log_dereference(loggers[*pos][i]);
seq_printf(s, "%s", logger->name);
if (i == 0 && loggers[*pos][i + 1] != NULL)
seq_printf(s, ",");
seq_puts(s, ",");
if (seq_has_overflowed(s))
return -ENOSPC;
}
seq_printf(s, ")\n");
seq_puts(s, ")\n");
if (seq_has_overflowed(s))
return -ENOSPC;

View File

@ -33,7 +33,6 @@ static unsigned int help(struct sk_buff *skb,
{
char buffer[sizeof("65535")];
u_int16_t port;
unsigned int ret;
/* Connection comes from client. */
exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port;
@ -63,14 +62,14 @@ static unsigned int help(struct sk_buff *skb,
}
sprintf(buffer, "%u", port);
ret = nf_nat_mangle_udp_packet(skb, exp->master, ctinfo,
protoff, matchoff, matchlen,
buffer, strlen(buffer));
if (ret != NF_ACCEPT) {
if (!nf_nat_mangle_udp_packet(skb, exp->master, ctinfo,
protoff, matchoff, matchlen,
buffer, strlen(buffer))) {
nf_ct_helper_log(skb, exp->master, "cannot mangle packet");
nf_ct_unexpect_related(exp);
return NF_DROP;
}
return ret;
return NF_ACCEPT;
}
static void __exit nf_nat_amanda_fini(void)

View File

@ -71,11 +71,10 @@ static void __nf_nat_decode_session(struct sk_buff *skb, struct flowi *fl)
if (ct == NULL)
return;
family = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.l3num;
rcu_read_lock();
family = nf_ct_l3num(ct);
l3proto = __nf_nat_l3proto_find(family);
if (l3proto == NULL)
goto out;
return;
dir = CTINFO2DIR(ctinfo);
if (dir == IP_CT_DIR_ORIGINAL)
@ -84,8 +83,6 @@ static void __nf_nat_decode_session(struct sk_buff *skb, struct flowi *fl)
statusbit = IPS_SRC_NAT;
l3proto->decode_session(skb, ct, dir, statusbit, fl);
out:
rcu_read_unlock();
}
int nf_xfrm_me_harder(struct net *net, struct sk_buff *skb, unsigned int family)
@ -411,12 +408,6 @@ nf_nat_setup_info(struct nf_conn *ct,
enum nf_nat_manip_type maniptype)
{
struct nf_conntrack_tuple curr_tuple, new_tuple;
struct nf_conn_nat *nat;
/* nat helper or nfctnetlink also setup binding */
nat = nf_ct_nat_ext_add(ct);
if (nat == NULL)
return NF_ACCEPT;
NF_CT_ASSERT(maniptype == NF_NAT_MANIP_SRC ||
maniptype == NF_NAT_MANIP_DST);
@ -549,10 +540,6 @@ struct nf_nat_proto_clean {
static int nf_nat_proto_remove(struct nf_conn *i, void *data)
{
const struct nf_nat_proto_clean *clean = data;
struct nf_conn_nat *nat = nfct_nat(i);
if (!nat)
return 0;
if ((clean->l3proto && nf_ct_l3num(i) != clean->l3proto) ||
(clean->l4proto && nf_ct_protonum(i) != clean->l4proto))
@ -563,12 +550,10 @@ static int nf_nat_proto_remove(struct nf_conn *i, void *data)
static int nf_nat_proto_clean(struct nf_conn *ct, void *data)
{
struct nf_conn_nat *nat = nfct_nat(ct);
if (nf_nat_proto_remove(ct, data))
return 1;
if (!nat)
if ((ct->status & IPS_SRC_NAT_DONE) == 0)
return 0;
/* This netns is being destroyed, and conntrack has nat null binding.
@ -716,13 +701,9 @@ EXPORT_SYMBOL_GPL(nf_nat_l3proto_unregister);
/* No one using conntrack by the time this called. */
static void nf_nat_cleanup_conntrack(struct nf_conn *ct)
{
struct nf_conn_nat *nat = nf_ct_ext_find(ct, NF_CT_EXT_NAT);
if (!nat)
return;
rhltable_remove(&nf_nat_bysource_table, &ct->nat_bysource,
nf_nat_bysource_params);
if (ct->status & IPS_SRC_NAT_DONE)
rhltable_remove(&nf_nat_bysource_table, &ct->nat_bysource,
nf_nat_bysource_params);
}
static struct nf_ct_ext_type nat_extend __read_mostly = {
@ -730,7 +711,6 @@ static struct nf_ct_ext_type nat_extend __read_mostly = {
.align = __alignof__(struct nf_conn_nat),
.destroy = nf_nat_cleanup_conntrack,
.id = NF_CT_EXT_NAT,
.flags = NF_CT_EXT_F_PREALLOC,
};
#if IS_ENABLED(CONFIG_NF_CT_NETLINK)
@ -820,7 +800,7 @@ nfnetlink_parse_nat_setup(struct nf_conn *ct,
/* No NAT information has been passed, allocate the null-binding */
if (attr == NULL)
return __nf_nat_alloc_null_binding(ct, manip);
return __nf_nat_alloc_null_binding(ct, manip) == NF_DROP ? -ENOMEM : 0;
err = nfnetlink_parse_nat(attr, ct, &range, l3proto);
if (err < 0)
@ -875,9 +855,6 @@ static int __init nf_nat_init(void)
nf_ct_helper_expectfn_register(&follow_master_nat);
/* Initialize fake conntrack so that NAT will skip it */
nf_ct_untracked_status_or(IPS_NAT_DONE_MASK);
BUG_ON(nfnetlink_parse_nat_setup_hook != NULL);
RCU_INIT_POINTER(nfnetlink_parse_nat_setup_hook,
nfnetlink_parse_nat_setup);
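The hunks above replace nf_conn_nat extension probes with tests on ct->status: whether NAT applies to a connection, and whether its binding has been set up, is encoded in status bits, so the extension no longer has to be allocated just to answer those questions. A runnable model of the bit tests, with the bit positions from the uapi conntrack header:

#include <stdio.h>

#define IPS_SRC_NAT       (1 << 4)
#define IPS_DST_NAT       (1 << 5)
#define IPS_SRC_NAT_DONE  (1 << 7)
#define IPS_DST_NAT_DONE  (1 << 8)
#define IPS_NAT_MASK      (IPS_DST_NAT | IPS_SRC_NAT)
#define IPS_NAT_DONE_MASK (IPS_DST_NAT_DONE | IPS_SRC_NAT_DONE)

int main(void)
{
        unsigned long status = IPS_SRC_NAT | IPS_SRC_NAT_DONE;

        printf("nat configured: %d\n", (status & IPS_NAT_MASK) != 0);
        printf("nat set up:     %d\n", (status & IPS_NAT_DONE_MASK) != 0);
        return 0;
}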

View File

@ -70,15 +70,15 @@ static void mangle_contents(struct sk_buff *skb,
}
/* Unusual, but possible case. */
static int enlarge_skb(struct sk_buff *skb, unsigned int extra)
static bool enlarge_skb(struct sk_buff *skb, unsigned int extra)
{
if (skb->len + extra > 65535)
return 0;
return false;
if (pskb_expand_head(skb, 0, extra - skb_tailroom(skb), GFP_ATOMIC))
return 0;
return false;
return 1;
return true;
}
/* Generic function for mangling variable-length address changes inside
@ -89,26 +89,26 @@ static int enlarge_skb(struct sk_buff *skb, unsigned int extra)
* skb enlargement, ...
*
* */
int __nf_nat_mangle_tcp_packet(struct sk_buff *skb,
struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
unsigned int protoff,
unsigned int match_offset,
unsigned int match_len,
const char *rep_buffer,
unsigned int rep_len, bool adjust)
bool __nf_nat_mangle_tcp_packet(struct sk_buff *skb,
struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
unsigned int protoff,
unsigned int match_offset,
unsigned int match_len,
const char *rep_buffer,
unsigned int rep_len, bool adjust)
{
const struct nf_nat_l3proto *l3proto;
struct tcphdr *tcph;
int oldlen, datalen;
if (!skb_make_writable(skb, skb->len))
return 0;
return false;
if (rep_len > match_len &&
rep_len - match_len > skb_tailroom(skb) &&
!enlarge_skb(skb, rep_len - match_len))
return 0;
return false;
SKB_LINEAR_ASSERT(skb);
@ -128,7 +128,7 @@ int __nf_nat_mangle_tcp_packet(struct sk_buff *skb,
nf_ct_seqadj_set(ct, ctinfo, tcph->seq,
(int)rep_len - (int)match_len);
return 1;
return true;
}
EXPORT_SYMBOL(__nf_nat_mangle_tcp_packet);
@ -142,7 +142,7 @@ EXPORT_SYMBOL(__nf_nat_mangle_tcp_packet);
* XXX - This function could be merged with nf_nat_mangle_tcp_packet which
* should be fairly easy to do.
*/
int
bool
nf_nat_mangle_udp_packet(struct sk_buff *skb,
struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
@ -157,12 +157,12 @@ nf_nat_mangle_udp_packet(struct sk_buff *skb,
int datalen, oldlen;
if (!skb_make_writable(skb, skb->len))
return 0;
return false;
if (rep_len > match_len &&
rep_len - match_len > skb_tailroom(skb) &&
!enlarge_skb(skb, rep_len - match_len))
return 0;
return false;
udph = (void *)skb->data + protoff;
@ -176,13 +176,13 @@ nf_nat_mangle_udp_packet(struct sk_buff *skb,
/* fix udp checksum if udp checksum was previously calculated */
if (!udph->check && skb->ip_summed != CHECKSUM_PARTIAL)
return 1;
return true;
l3proto = __nf_nat_l3proto_find(nf_ct_l3num(ct));
l3proto->csum_recalc(skb, IPPROTO_UDP, udph, &udph->check,
datalen, oldlen);
return 1;
return true;
}
EXPORT_SYMBOL(nf_nat_mangle_udp_packet);

View File

@ -37,7 +37,6 @@ static unsigned int help(struct sk_buff *skb,
struct nf_conn *ct = exp->master;
union nf_inet_addr newaddr;
u_int16_t port;
unsigned int ret;
/* Reply comes from server. */
newaddr = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u3;
@ -83,14 +82,14 @@ static unsigned int help(struct sk_buff *skb,
pr_debug("nf_nat_irc: inserting '%s' == %pI4, port %u\n",
buffer, &newaddr.ip, port);
ret = nf_nat_mangle_tcp_packet(skb, ct, ctinfo, protoff, matchoff,
matchlen, buffer, strlen(buffer));
if (ret != NF_ACCEPT) {
if (!nf_nat_mangle_tcp_packet(skb, ct, ctinfo, protoff, matchoff,
matchlen, buffer, strlen(buffer))) {
nf_ct_helper_log(skb, ct, "cannot mangle packet");
nf_ct_unexpect_related(exp);
return NF_DROP;
}
return ret;
return NF_ACCEPT;
}
static void __exit nf_nat_irc_fini(void)

View File

@ -96,15 +96,18 @@ void nf_queue_entry_get_refs(struct nf_queue_entry *entry)
}
EXPORT_SYMBOL_GPL(nf_queue_entry_get_refs);
void nf_queue_nf_hook_drop(struct net *net, const struct nf_hook_entry *entry)
unsigned int nf_queue_nf_hook_drop(struct net *net)
{
const struct nf_queue_handler *qh;
unsigned int count = 0;
rcu_read_lock();
qh = rcu_dereference(net->nf.queue_handler);
if (qh)
qh->nf_hook_drop(net, entry);
count = qh->nf_hook_drop(net);
rcu_read_unlock();
return count;
}
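nf_queue_nf_hook_drop() now reports how many queue entries it flushed, which is what enables the selective synchronize_net() mentioned in the pull request: a teardown path only needs the extra grace period when packets were actually sitting in a userspace queue. A hedged sketch of a caller; the function name is illustrative and the exact call site in this series may be shaped differently:

static void example_hook_teardown(struct net *net)
{
        synchronize_net();              /* no new packets can be queued */

        /* flush verdicts still pending in nfnetlink_queue; wait a
         * second time only if there was anything to flush */
        if (nf_queue_nf_hook_drop(net))
                synchronize_net();
}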
static int __nf_queue(struct sk_buff *skb, const struct nf_hook_state *state,

View File

@ -66,8 +66,8 @@ synproxy_parse_options(const struct sk_buff *skb, unsigned int doff,
case TCPOPT_WINDOW:
if (opsize == TCPOLEN_WINDOW) {
opts->wscale = *ptr;
if (opts->wscale > 14)
opts->wscale = 14;
if (opts->wscale > TCP_MAX_WSCALE)
opts->wscale = TCP_MAX_WSCALE;
opts->options |= XT_SYNPROXY_OPT_WSCALE;
}
break;
@ -287,9 +287,9 @@ static int synproxy_cpu_seq_show(struct seq_file *seq, void *v)
struct synproxy_stats *stats = v;
if (v == SEQ_START_TOKEN) {
seq_printf(seq, "entries\t\tsyn_received\t"
"cookie_invalid\tcookie_valid\t"
"cookie_retrans\tconn_reopened\n");
seq_puts(seq, "entries\t\tsyn_received\t"
"cookie_invalid\tcookie_valid\t"
"cookie_retrans\tconn_reopened\n");
return 0;
}
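This is the same clamp as in the TCP conntrack hunk earlier: RFC 7323 caps the window-scale shift at 14, and TCP_MAX_WSCALE now names that limit instead of a bare constant. A runnable illustration of what the cap means for the effective receive window:

#include <stdio.h>

#define TCP_MAX_WSCALE 14

int main(void)
{
        unsigned int wscale = 17;               /* bogus on-wire value */

        if (wscale > TCP_MAX_WSCALE)
                wscale = TCP_MAX_WSCALE;        /* clamp, as above */

        /* 65535 << 14 = 1073725440, just under 1 GiB */
        printf("max window: %u\n", 65535u << wscale);
        return 0;
}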

View File

@ -144,7 +144,7 @@ static int nf_tables_register_hooks(struct net *net,
unsigned int hook_nops)
{
if (table->flags & NFT_TABLE_F_DORMANT ||
!(chain->flags & NFT_BASE_CHAIN))
!nft_is_base_chain(chain))
return 0;
return nf_register_net_hooks(net, nft_base_chain(chain)->ops,
@ -157,7 +157,7 @@ static void nf_tables_unregister_hooks(struct net *net,
unsigned int hook_nops)
{
if (table->flags & NFT_TABLE_F_DORMANT ||
!(chain->flags & NFT_BASE_CHAIN))
!nft_is_base_chain(chain))
return;
nf_unregister_net_hooks(net, nft_base_chain(chain)->ops, hook_nops);
@ -438,7 +438,7 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
struct nlmsghdr *nlh;
struct nfgenmsg *nfmsg;
event |= NFNL_SUBSYS_NFTABLES << 8;
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
if (nlh == NULL)
goto nla_put_failure;
@ -587,7 +587,7 @@ static void _nf_tables_table_disable(struct net *net,
list_for_each_entry(chain, &table->chains, list) {
if (!nft_is_active_next(net, chain))
continue;
if (!(chain->flags & NFT_BASE_CHAIN))
if (!nft_is_base_chain(chain))
continue;
if (cnt && i++ == cnt)
@ -608,7 +608,7 @@ static int nf_tables_table_enable(struct net *net,
list_for_each_entry(chain, &table->chains, list) {
if (!nft_is_active_next(net, chain))
continue;
if (!(chain->flags & NFT_BASE_CHAIN))
if (!nft_is_base_chain(chain))
continue;
err = nf_register_net_hooks(net, nft_base_chain(chain)->ops,
@ -989,7 +989,7 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
struct nlmsghdr *nlh;
struct nfgenmsg *nfmsg;
event |= NFNL_SUBSYS_NFTABLES << 8;
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
if (nlh == NULL)
goto nla_put_failure;
@ -1007,7 +1007,7 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
if (nla_put_string(skb, NFTA_CHAIN_NAME, chain->name))
goto nla_put_failure;
if (chain->flags & NFT_BASE_CHAIN) {
if (nft_is_base_chain(chain)) {
const struct nft_base_chain *basechain = nft_base_chain(chain);
const struct nf_hook_ops *ops = &basechain->ops[0];
struct nlattr *nest;
@ -1227,7 +1227,7 @@ static void nf_tables_chain_destroy(struct nft_chain *chain)
{
BUG_ON(chain->use > 0);
if (chain->flags & NFT_BASE_CHAIN) {
if (nft_is_base_chain(chain)) {
struct nft_base_chain *basechain = nft_base_chain(chain);
module_put(basechain->type->owner);
@ -1365,8 +1365,8 @@ static int nf_tables_newchain(struct net *net, struct sock *nlsk,
}
if (nla[NFTA_CHAIN_POLICY]) {
if ((chain != NULL &&
!(chain->flags & NFT_BASE_CHAIN)))
if (chain != NULL &&
!nft_is_base_chain(chain))
return -EOPNOTSUPP;
if (chain == NULL &&
@ -1397,7 +1397,7 @@ static int nf_tables_newchain(struct net *net, struct sock *nlsk,
struct nft_chain_hook hook;
struct nf_hook_ops *ops;
if (!(chain->flags & NFT_BASE_CHAIN))
if (!nft_is_base_chain(chain))
return -EBUSY;
err = nft_chain_parse_hook(net, nla, afi, &hook,
@ -1434,7 +1434,7 @@ static int nf_tables_newchain(struct net *net, struct sock *nlsk,
}
if (nla[NFTA_CHAIN_COUNTERS]) {
if (!(chain->flags & NFT_BASE_CHAIN))
if (!nft_is_base_chain(chain))
return -EOPNOTSUPP;
stats = nft_stats_alloc(nla[NFTA_CHAIN_COUNTERS]);
@ -1886,10 +1886,9 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
const struct nft_expr *expr, *next;
struct nlattr *list;
const struct nft_rule *prule;
int type = event | NFNL_SUBSYS_NFTABLES << 8;
u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
nlh = nlmsg_put(skb, portid, seq, type, sizeof(struct nfgenmsg),
flags);
nlh = nlmsg_put(skb, portid, seq, type, sizeof(struct nfgenmsg), flags);
if (nlh == NULL)
goto nla_put_failure;
@ -1907,7 +1906,7 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
goto nla_put_failure;
if ((event != NFT_MSG_DELRULE) && (rule->list.prev != &chain->rules)) {
prule = list_entry(rule->list.prev, struct nft_rule, list);
prule = list_prev_entry(rule, list);
if (nla_put_be64(skb, NFTA_RULE_POSITION,
cpu_to_be64(prule->handle),
NFTA_RULE_PAD))
@ -2646,7 +2645,7 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
u32 portid = ctx->portid;
u32 seq = ctx->seq;
event |= NFNL_SUBSYS_NFTABLES << 8;
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg),
flags);
if (nlh == NULL)
@ -3398,8 +3397,7 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
if (IS_ERR(set))
return PTR_ERR(set);
event = NFT_MSG_NEWSETELEM;
event |= NFNL_SUBSYS_NFTABLES << 8;
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWSETELEM);
portid = NETLINK_CB(cb->skb).portid;
seq = cb->nlh->nlmsg_seq;
@ -3484,7 +3482,7 @@ static int nf_tables_fill_setelem_info(struct sk_buff *skb,
struct nlattr *nest;
int err;
event |= NFNL_SUBSYS_NFTABLES << 8;
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg),
flags);
if (nlh == NULL)
@ -4257,7 +4255,7 @@ static int nf_tables_fill_obj_info(struct sk_buff *skb, struct net *net,
struct nfgenmsg *nfmsg;
struct nlmsghdr *nlh;
event |= NFNL_SUBSYS_NFTABLES << 8;
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), flags);
if (nlh == NULL)
goto nla_put_failure;
@ -4439,8 +4437,6 @@ static int nf_tables_getobj(struct net *net, struct sock *nlsk,
err:
kfree_skb(skb2);
return err;
return 0;
}
static void nft_obj_destroy(struct nft_object *obj)
@ -4530,7 +4526,7 @@ static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net,
{
struct nlmsghdr *nlh;
struct nfgenmsg *nfmsg;
int event = (NFNL_SUBSYS_NFTABLES << 8) | NFT_MSG_NEWGEN;
int event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWGEN);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(struct nfgenmsg), 0);
if (nlh == NULL)
@ -4712,7 +4708,7 @@ static void nft_chain_commit_update(struct nft_trans *trans)
if (nft_trans_chain_name(trans)[0])
strcpy(trans->ctx.chain->name, nft_trans_chain_name(trans));
if (!(trans->ctx.chain->flags & NFT_BASE_CHAIN))
if (!nft_is_base_chain(trans->ctx.chain))
return;
basechain = nft_base_chain(trans->ctx.chain);
@ -5026,7 +5022,7 @@ int nft_chain_validate_dependency(const struct nft_chain *chain,
{
const struct nft_base_chain *basechain;
if (chain->flags & NFT_BASE_CHAIN) {
if (nft_is_base_chain(chain)) {
basechain = nft_base_chain(chain);
if (basechain->type->type != type)
return -EOPNOTSUPP;
@ -5040,7 +5036,7 @@ int nft_chain_validate_hooks(const struct nft_chain *chain,
{
struct nft_base_chain *basechain;
if (chain->flags & NFT_BASE_CHAIN) {
if (nft_is_base_chain(chain)) {
basechain = nft_base_chain(chain);
if ((1 << basechain->ops[0].hooknum) & hook_flags)
@ -5350,7 +5346,7 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
tb[NFTA_VERDICT_CHAIN], genmask);
if (IS_ERR(chain))
return PTR_ERR(chain);
if (chain->flags & NFT_BASE_CHAIN)
if (nft_is_base_chain(chain))
return -EOPNOTSUPP;
chain->use++;
@ -5523,7 +5519,7 @@ int __nft_release_basechain(struct nft_ctx *ctx)
{
struct nft_rule *rule, *nr;
BUG_ON(!(ctx->chain->flags & NFT_BASE_CHAIN));
BUG_ON(!nft_is_base_chain(ctx->chain));
nf_tables_unregister_hooks(ctx->net, ctx->chain->table, ctx->chain,
ctx->afi->nops);
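Every open-coded `chain->flags & NFT_BASE_CHAIN` test in this file became nft_is_base_chain(); a base chain is one bound to a netfilter hook, and the predicate centralizes that definition. The helper itself boils down to a one-liner along these lines:

static inline bool nft_is_base_chain(const struct nft_chain *chain)
{
        return chain->flags & NFT_BASE_CHAIN;
}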

View File

@ -128,7 +128,7 @@ static int nf_tables_netdev_event(struct notifier_block *this,
list_for_each_entry(table, &afi->tables, list) {
ctx.table = table;
list_for_each_entry_safe(chain, nr, &table->chains, list) {
if (!(chain->flags & NFT_BASE_CHAIN))
if (!nft_is_base_chain(chain))
continue;
ctx.chain = chain;

View File

@ -169,7 +169,7 @@ void nft_trace_notify(struct nft_traceinfo *info)
struct nlmsghdr *nlh;
struct sk_buff *skb;
unsigned int size;
int event = (NFNL_SUBSYS_NFTABLES << 8) | NFT_MSG_TRACE;
u16 event;
if (!nfnetlink_has_listeners(nft_net(pkt), NFNLGRP_NFTRACE))
return;
@ -198,6 +198,7 @@ void nft_trace_notify(struct nft_traceinfo *info)
if (!skb)
return;
event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_TRACE);
nlh = nlmsg_put(skb, 0, 0, event, sizeof(struct nfgenmsg), 0);
if (!nlh)
goto nla_put_failure;

View File

@ -503,7 +503,7 @@ static void nfnetlink_rcv(struct sk_buff *skb)
if (nlh->nlmsg_type == NFNL_MSG_BATCH_BEGIN)
nfnetlink_rcv_skb_batch(skb, nlh);
else
netlink_rcv_skb(skb, &nfnetlink_rcv_msg);
netlink_rcv_skb(skb, nfnetlink_rcv_msg);
}
#ifdef CONFIG_MODULES

View File

@ -139,7 +139,7 @@ nfnl_acct_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
u64 pkts, bytes;
u32 old_flags;
event |= NFNL_SUBSYS_ACCT << 8;
event = nfnl_msg_type(NFNL_SUBSYS_ACCT, event);
nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
if (nlh == NULL)
goto nlmsg_failure;

View File

@@ -105,7 +105,7 @@ nfnl_cthelper_from_nlattr(struct nlattr *attr, struct nf_conn *ct)
 	if (help->helper->data_len == 0)
 		return -EINVAL;
-	memcpy(help->data, nla_data(attr), help->helper->data_len);
+	nla_memcpy(help->data, nla_data(attr), sizeof(help->data));
 	return 0;
 }
@@ -152,6 +152,9 @@ nfnl_cthelper_expect_policy(struct nf_conntrack_expect_policy *expect_policy,
 	       nla_data(tb[NFCTH_POLICY_NAME]), NF_CT_HELPER_NAME_LEN);
 	expect_policy->max_expected =
 		ntohl(nla_get_be32(tb[NFCTH_POLICY_EXPECT_MAX]));
+	if (expect_policy->max_expected > NF_CT_EXPECT_MAX_CNT)
+		return -EINVAL;
 	expect_policy->timeout =
 		ntohl(nla_get_be32(tb[NFCTH_POLICY_EXPECT_TIMEOUT]));
@@ -215,6 +218,7 @@ nfnl_cthelper_create(const struct nlattr * const tb[],
 {
 	struct nf_conntrack_helper *helper;
 	struct nfnl_cthelper *nfcth;
+	unsigned int size;
 	int ret;
 	if (!tb[NFCTH_TUPLE] || !tb[NFCTH_POLICY] || !tb[NFCTH_PRIV_DATA_LEN])
@@ -230,7 +234,12 @@ nfnl_cthelper_create(const struct nlattr * const tb[],
 		goto err1;
 	strncpy(helper->name, nla_data(tb[NFCTH_NAME]), NF_CT_HELPER_NAME_LEN);
-	helper->data_len = ntohl(nla_get_be32(tb[NFCTH_PRIV_DATA_LEN]));
+	size = ntohl(nla_get_be32(tb[NFCTH_PRIV_DATA_LEN]));
+	if (size > FIELD_SIZEOF(struct nf_conn_help, data)) {
+		ret = -ENOMEM;
+		goto err2;
+	}
 	helper->flags |= NF_CT_HELPER_F_USERSPACE;
 	memcpy(&helper->tuple, tuple, sizeof(struct nf_conntrack_tuple));
@@ -292,6 +301,9 @@ nfnl_cthelper_update_policy_one(const struct nf_conntrack_expect_policy *policy,
 	new_policy->max_expected =
 		ntohl(nla_get_be32(tb[NFCTH_POLICY_EXPECT_MAX]));
+	if (new_policy->max_expected > NF_CT_EXPECT_MAX_CNT)
+		return -EINVAL;
 	new_policy->timeout =
 		ntohl(nla_get_be32(tb[NFCTH_POLICY_EXPECT_TIMEOUT]));
@@ -503,7 +515,7 @@ nfnl_cthelper_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
 	unsigned int flags = portid ? NLM_F_MULTI : 0;
 	int status;
-	event |= NFNL_SUBSYS_CTHELPER << 8;
+	event = nfnl_msg_type(NFNL_SUBSYS_CTHELPER, event);
 	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
 	if (nlh == NULL)
 		goto nlmsg_failure;
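The cthelper changes above bound userspace helper private data by the destination buffer instead of a caller-supplied length, and cap max_expected with NF_CT_EXPECT_MAX_CNT. They rely on the fixed 32-byte scratchpad this series introduces in struct nf_conn_help; roughly (field layout abridged from include/net/netfilter/nf_conntrack_helper.h, shown here as a sketch):

struct nf_conn_help {
	/* Helper, if any. */
	struct nf_conntrack_helper __rcu *helper;
	struct hlist_head expectations;
	/* Current number of expected connections */
	u8 expecting[NF_CT_MAX_EXPECT_CLASSES];
	/* Fixed private scratchpad; variable-size extensions are gone. */
	char data[32] __aligned(8);
};

FIELD_SIZEOF(struct nf_conn_help, data) in the rejection test above therefore evaluates to 32, so an oversized NFCTH_PRIV_DATA_LEN request now fails with -ENOMEM instead of allocating a variable-size extension.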

View File

@@ -159,7 +159,7 @@ ctnl_timeout_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
 	unsigned int flags = portid ? NLM_F_MULTI : 0;
 	struct nf_conntrack_l4proto *l4proto = timeout->l4proto;
-	event |= NFNL_SUBSYS_CTNETLINK_TIMEOUT << 8;
+	event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_TIMEOUT, event);
 	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
 	if (nlh == NULL)
 		goto nlmsg_failure;
@@ -432,7 +432,7 @@ cttimeout_default_fill_info(struct net *net, struct sk_buff *skb, u32 portid,
 	struct nfgenmsg *nfmsg;
 	unsigned int flags = portid ? NLM_F_MULTI : 0;
-	event |= NFNL_SUBSYS_CTNETLINK_TIMEOUT << 8;
+	event = nfnl_msg_type(NFNL_SUBSYS_CTNETLINK_TIMEOUT, event);
 	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
 	if (nlh == NULL)
 		goto nlmsg_failure;

View File

@@ -411,7 +411,7 @@ __build_packet_message(struct nfnl_log_net *log,
 	const unsigned char *hwhdrp;
 	nlh = nlmsg_put(inst->skb, 0, 0,
-			NFNL_SUBSYS_ULOG << 8 | NFULNL_MSG_PACKET,
+			nfnl_msg_type(NFNL_SUBSYS_ULOG, NFULNL_MSG_PACKET),
 			sizeof(struct nfgenmsg), 0);
 	if (!nlh)
 		return -1;
@@ -803,7 +803,7 @@ static int nfulnl_recv_unsupp(struct net *net, struct sock *ctnl,
 static struct nf_logger nfulnl_logger __read_mostly = {
 	.name	= "nfnetlink_log",
 	.type	= NF_LOG_TYPE_ULOG,
-	.logfn	= &nfulnl_log_packet,
+	.logfn	= nfulnl_log_packet,
 	.me	= THIS_MODULE,
 };
@@ -1140,10 +1140,10 @@ out:
 static void __exit nfnetlink_log_fini(void)
 {
-	nf_log_unregister(&nfulnl_logger);
 	nfnetlink_subsys_unregister(&nfulnl_subsys);
 	netlink_unregister_notifier(&nfulnl_rtnl_notifier);
 	unregister_pernet_subsys(&nfnl_log_net_ops);
+	nf_log_unregister(&nfulnl_logger);
 }
 MODULE_DESCRIPTION("netfilter userspace logging");

View File

@@ -447,7 +447,7 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
 	}
 	nlh = nlmsg_put(skb, 0, 0,
-			NFNL_SUBSYS_QUEUE << 8 | NFQNL_MSG_PACKET,
+			nfnl_msg_type(NFNL_SUBSYS_QUEUE, NFQNL_MSG_PACKET),
 			sizeof(struct nfgenmsg), 0);
 	if (!nlh) {
 		skb_tx_error(entskb);
@@ -922,16 +922,10 @@ static struct notifier_block nfqnl_dev_notifier = {
 	.notifier_call	= nfqnl_rcv_dev_event,
 };
-static int nf_hook_cmp(struct nf_queue_entry *entry, unsigned long entry_ptr)
-{
-	return rcu_access_pointer(entry->hook) ==
-	       (struct nf_hook_entry *)entry_ptr;
-}
-static void nfqnl_nf_hook_drop(struct net *net,
-			       const struct nf_hook_entry *hook)
+static unsigned int nfqnl_nf_hook_drop(struct net *net)
 {
 	struct nfnl_queue_net *q = nfnl_queue_pernet(net);
+	unsigned int instances = 0;
 	int i;
 	rcu_read_lock();
@@ -939,10 +933,14 @@ static void nfqnl_nf_hook_drop(struct net *net,
 		struct nfqnl_instance *inst;
 		struct hlist_head *head = &q->instance_table[i];
-		hlist_for_each_entry_rcu(inst, head, hlist)
-			nfqnl_flush(inst, nf_hook_cmp, (unsigned long)hook);
+		hlist_for_each_entry_rcu(inst, head, hlist) {
+			nfqnl_flush(inst, NULL, 0);
+			instances++;
+		}
 	}
 	rcu_read_unlock();
+	return instances;
 }
 static int
@@ -1213,8 +1211,8 @@ static const struct nla_policy nfqa_cfg_policy[NFQA_CFG_MAX+1] = {
 };
 static const struct nf_queue_handler nfqh = {
-	.outfn		= &nfqnl_enqueue_packet,
-	.nf_hook_drop	= &nfqnl_nf_hook_drop,
+	.outfn		= nfqnl_enqueue_packet,
+	.nf_hook_drop	= nfqnl_nf_hook_drop,
 };
 static int nfqnl_recv_config(struct net *net, struct sock *ctnl,
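nfqnl_nf_hook_drop() now flushes every instance unconditionally and reports how many it touched. Per this series' goal of selective synchronize_net() calls, that count presumably lets the generic nf_queue teardown path skip the RCU grace period when nothing was queued; the caller-side idea is roughly this (illustrative sketch, not the exact nf_queue code):

	unsigned int count = nfqnl_nf_hook_drop(net);

	/* Only pay for an RCU grace period if entries were dropped. */
	if (count)
		synchronize_net();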

View File

@@ -42,7 +42,8 @@ static int nft_compat_chain_validate_dependency(const char *tablename,
 {
 	const struct nft_base_chain *basechain;
-	if (!tablename || !(chain->flags & NFT_BASE_CHAIN))
+	if (!tablename ||
+	    !nft_is_base_chain(chain))
 		return 0;
 	basechain = nft_base_chain(chain);
@@ -165,7 +166,7 @@ nft_target_set_tgchk_param(struct xt_tgchk_param *par,
 	par->entryinfo	= entry;
 	par->target	= target;
 	par->targinfo	= info;
-	if (ctx->chain->flags & NFT_BASE_CHAIN) {
+	if (nft_is_base_chain(ctx->chain)) {
 		const struct nft_base_chain *basechain =
 			nft_base_chain(ctx->chain);
 		const struct nf_hook_ops *ops = &basechain->ops[0];
@@ -298,7 +299,7 @@ static int nft_target_validate(const struct nft_ctx *ctx,
 	unsigned int hook_mask = 0;
 	int ret;
-	if (ctx->chain->flags & NFT_BASE_CHAIN) {
+	if (nft_is_base_chain(ctx->chain)) {
 		const struct nft_base_chain *basechain =
 			nft_base_chain(ctx->chain);
 		const struct nf_hook_ops *ops = &basechain->ops[0];
@@ -379,7 +380,7 @@ nft_match_set_mtchk_param(struct xt_mtchk_param *par, const struct nft_ctx *ctx,
 	par->entryinfo	= entry;
 	par->match	= match;
 	par->matchinfo	= info;
-	if (ctx->chain->flags & NFT_BASE_CHAIN) {
+	if (nft_is_base_chain(ctx->chain)) {
 		const struct nft_base_chain *basechain =
 			nft_base_chain(ctx->chain);
 		const struct nf_hook_ops *ops = &basechain->ops[0];
@@ -477,7 +478,7 @@ static int nft_match_validate(const struct nft_ctx *ctx,
 	unsigned int hook_mask = 0;
 	int ret;
-	if (ctx->chain->flags & NFT_BASE_CHAIN) {
+	if (nft_is_base_chain(ctx->chain)) {
 		const struct nft_base_chain *basechain =
 			nft_base_chain(ctx->chain);
 		const struct nf_hook_ops *ops = &basechain->ops[0];
@@ -503,7 +504,7 @@ nfnl_compat_fill_info(struct sk_buff *skb, u32 portid, u32 seq, u32 type,
 	struct nfgenmsg *nfmsg;
 	unsigned int flags = portid ? NLM_F_MULTI : 0;
-	event |= NFNL_SUBSYS_NFT_COMPAT << 8;
+	event = nfnl_msg_type(NFNL_SUBSYS_NFT_COMPAT, event);
 	nlh = nlmsg_put(skb, portid, seq, event, sizeof(*nfmsg), flags);
 	if (nlh == NULL)
 		goto nlmsg_failure;

View File

@@ -72,12 +72,12 @@ static void nft_ct_get_eval(const struct nft_expr *expr,
 	switch (priv->key) {
 	case NFT_CT_STATE:
-		if (ct == NULL)
-			state = NF_CT_STATE_INVALID_BIT;
-		else if (nf_ct_is_untracked(ct))
+		if (ct)
+			state = NF_CT_STATE_BIT(ctinfo);
+		else if (ctinfo == IP_CT_UNTRACKED)
 			state = NF_CT_STATE_UNTRACKED_BIT;
 		else
-			state = NF_CT_STATE_BIT(ctinfo);
+			state = NF_CT_STATE_INVALID_BIT;
 		*dest = state;
 		return;
 	default:
@@ -264,7 +264,7 @@ static void nft_ct_set_eval(const struct nft_expr *expr,
 	struct nf_conn *ct;
 	ct = nf_ct_get(skb, &ctinfo);
-	if (ct == NULL)
+	if (ct == NULL || nf_ct_is_template(ct))
 		return;
 	switch (priv->key) {
@@ -283,6 +283,22 @@ static void nft_ct_set_eval(const struct nft_expr *expr,
 				      &regs->data[priv->sreg],
 				      NF_CT_LABELS_MAX_SIZE / sizeof(u32));
 		break;
 #endif
+#ifdef CONFIG_NF_CONNTRACK_EVENTS
+	case NFT_CT_EVENTMASK: {
+		struct nf_conntrack_ecache *e = nf_ct_ecache_find(ct);
+		u32 ctmask = regs->data[priv->sreg];
+
+		if (e) {
+			if (e->ctmask != ctmask)
+				e->ctmask = ctmask;
+			break;
+		}
+
+		if (ctmask && !nf_ct_is_confirmed(ct))
+			nf_ct_ecache_ext_add(ct, ctmask, 0, GFP_ATOMIC);
+		break;
+	}
+#endif
 	default:
 		break;
@@ -538,6 +554,13 @@ static int nft_ct_set_init(const struct nft_ctx *ctx,
 		nft_ct_pcpu_template_refcnt++;
 		len = sizeof(u16);
 		break;
 #endif
+#ifdef CONFIG_NF_CONNTRACK_EVENTS
+	case NFT_CT_EVENTMASK:
+		if (tb[NFTA_CT_DIRECTION])
+			return -EINVAL;
+		len = sizeof(u32);
+		break;
+#endif
 	default:
 		return -EOPNOTSUPP;
@@ -702,7 +725,7 @@ nft_ct_select_ops(const struct nft_ctx *ctx,
 static struct nft_expr_type nft_ct_type __read_mostly = {
 	.name		= "ct",
-	.select_ops	= &nft_ct_select_ops,
+	.select_ops	= nft_ct_select_ops,
 	.policy		= nft_ct_policy,
 	.maxattr	= NFTA_CT_MAX,
 	.owner		= THIS_MODULE,
@@ -718,12 +741,10 @@ static void nft_notrack_eval(const struct nft_expr *expr,
 	ct = nf_ct_get(pkt->skb, &ctinfo);
 	/* Previously seen (loopback or untracked)? Ignore. */
-	if (ct)
+	if (ct || ctinfo == IP_CT_UNTRACKED)
 		return;
-	ct = nf_ct_untracked_get();
-	atomic_inc(&ct->ct_general.use);
-	nf_ct_set(skb, ct, IP_CT_NEW);
+	nf_ct_set(skb, ct, IP_CT_UNTRACKED);
 }
 static struct nft_expr_type nft_notrack_type;
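With the fake untracked conntrack object gone, "untracked" is now encoded in the ctinfo bits of skb->_nfct alongside the (here NULL) conntrack pointer, which is why nft_notrack_eval() no longer grabs a template or bumps a refcount. A sketch of the encoding this depends on, per nf_ct_set() in the tree (shown here for clarity, not part of this diff):

static inline void nf_ct_set(struct sk_buff *skb, struct nf_conn *ct,
			     enum ip_conntrack_info info)
{
	/* pointer is 8-byte aligned, so the low bits carry the ctinfo;
	 * ct may be NULL, e.g. for IP_CT_UNTRACKED */
	skb->_nfct = (unsigned long)ct | info;
}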

View File

@@ -232,7 +232,7 @@ nft_exthdr_select_ops(const struct nft_ctx *ctx,
 static struct nft_expr_type nft_exthdr_type __read_mostly = {
 	.name		= "exthdr",
-	.select_ops	= &nft_exthdr_select_ops,
+	.select_ops	= nft_exthdr_select_ops,
 	.policy		= nft_exthdr_policy,
 	.maxattr	= NFTA_EXTHDR_MAX,
 	.owner		= THIS_MODULE,

View File

@@ -228,7 +228,7 @@ nft_hash_select_ops(const struct nft_ctx *ctx,
 static struct nft_expr_type nft_hash_type __read_mostly = {
 	.name		= "hash",
-	.select_ops	= &nft_hash_select_ops,
+	.select_ops	= nft_hash_select_ops,
 	.policy		= nft_hash_policy,
 	.maxattr	= NFTA_HASH_MAX,
 	.owner		= THIS_MODULE,

View File

@@ -467,7 +467,7 @@ nft_meta_select_ops(const struct nft_ctx *ctx,
 static struct nft_expr_type nft_meta_type __read_mostly = {
 	.name		= "meta",
-	.select_ops	= &nft_meta_select_ops,
+	.select_ops	= nft_meta_select_ops,
 	.policy		= nft_meta_policy,
 	.maxattr	= NFTA_META_MAX,
 	.owner		= THIS_MODULE,

View File

@@ -188,7 +188,7 @@ nft_ng_select_ops(const struct nft_ctx *ctx, const struct nlattr * const tb[])
 static struct nft_expr_type nft_ng_type __read_mostly = {
 	.name		= "numgen",
-	.select_ops	= &nft_ng_select_ops,
+	.select_ops	= nft_ng_select_ops,
 	.policy		= nft_ng_policy,
 	.maxattr	= NFTA_NG_MAX,
 	.owner		= THIS_MODULE,

Some files were not shown because too many files have changed in this diff.