diff --git a/Documentation/networking/operstates.txt b/Documentation/networking/operstates.txt
new file mode 100644
index 000000000000..4a21d9bb836b
--- /dev/null
+++ b/Documentation/networking/operstates.txt
@@ -0,0 +1,161 @@
+
+1. Introduction
+
+Linux distinguishes between administrative and operational state of an
+interface. Administrative state is the result of "ip link set dev
+<dev> up or down" and reflects whether the administrator wants to use
+the device for traffic.
+
+However, an interface is not usable just because the admin enabled it
+- ethernet needs to be plugged into the switch and, depending on
+a site's networking policy and configuration, an 802.1X authentication
+may have to be performed before user data can be transferred. Operational
+state shows the ability of an interface to transmit this user data.
+
+Because of 802.1X, userspace must be able to influence the
+operational state. To accommodate this, operational state is
+split into two parts: two flags that can be set only by the driver, and
+an RFC2863-compatible state that is derived from these flags, a policy,
+and is changeable from userspace under certain rules.
+
+
+2. Querying from userspace
+
+Both admin and operational state can be queried via the netlink
+operation RTM_GETLINK. It is also possible to subscribe to RTMGRP_LINK
+to be notified of updates. This matters for setting state from userspace.
+
+These values contain interface state:
+
+ifinfomsg::if_flags & IFF_UP:
+ Interface is admin up
+ifinfomsg::if_flags & IFF_RUNNING:
+ Interface is in RFC2863 operational state UP or UNKNOWN. This is for
+ backward compatibility; routing daemons and DHCP clients can use this
+ flag to determine whether they should use the interface.
+ifinfomsg::if_flags & IFF_LOWER_UP:
+ Driver has signaled netif_carrier_on()
+ifinfomsg::if_flags & IFF_DORMANT:
+ Driver has signaled netif_dormant_on()
+
+These interface flags can also be queried without netlink using the
+SIOCGIFFLAGS ioctl.
+
+TLV IFLA_OPERSTATE
+
+contains the RFC2863 state of the interface in numeric representation:
+
+IF_OPER_UNKNOWN (0):
+ Interface is in an unknown state; neither driver nor userspace has set
+ the operational state. The interface must be considered usable, as
+ setting the operational state has not been implemented in every driver.
+IF_OPER_NOTPRESENT (1):
+ Unused in the current kernel (notpresent interfaces normally disappear);
+ just a numerical placeholder.
+IF_OPER_DOWN (2):
+ Interface is unable to transfer data on L1, e.g. the ethernet cable is
+ not plugged in or the interface is admin down.
+IF_OPER_LOWERLAYERDOWN (3):
+ Interfaces stacked on an interface that is IF_OPER_DOWN show this
+ state (e.g. VLAN).
+IF_OPER_TESTING (4):
+ Unused in the current kernel.
+IF_OPER_DORMANT (5):
+ Interface is L1 up, but waiting for an external event, e.g. for a
+ protocol to establish (802.1X).
+IF_OPER_UP (6):
+ Interface is operationally up and can be used.
+
+This TLV can also be queried via sysfs.
+
+TLV IFLA_LINKMODE
+
+contains the link policy. This is needed for the userspace interaction
+described below.
+
+This TLV can also be queried via sysfs.
+
+
+3. Kernel driver API
+
+Kernel drivers have access to two flags that map to IFF_LOWER_UP and
+IFF_DORMANT. These flags can be set from anywhere, even from
+interrupt context. It is guaranteed that only the driver has write access;
+however, if different layers of the driver manipulate the same flag,
+the driver has to provide the necessary synchronisation.
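For illustration, a minimal sketch of how a driver might drive these two
flags from its link-status handling; the foo_* names and fields are
hypothetical and not taken from any in-tree driver. The two flags
themselves are described in detail below.

#include <linux/netdevice.h>

/* Hypothetical driver state; illustrative only. */
struct foo_priv {
	struct net_device *netdev;
	int phy_link;		/* L1 link as reported by the hardware */
	int authenticated;	/* driver-internal handshake completed */
};

/* Called from the device's link-status interrupt or polling routine. */
static void foo_update_link(struct foo_priv *priv)
{
	if (!priv->phy_link) {
		netif_carrier_off(priv->netdev);	/* operstate becomes IF_OPER_DOWN */
		return;
	}

	netif_carrier_on(priv->netdev);			/* L1 is up */

	if (priv->authenticated)
		netif_dormant_off(priv->netdev);	/* IF_OPER_UP (or DORMANT if userspace asked for it) */
	else
		netif_dormant_on(priv->netdev);		/* IF_OPER_DORMANT */
}

Both calls may be made from interrupt context; the translation to
IFLA_OPERSTATE described below happens asynchronously from a workqueue.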
+
+__LINK_STATE_NOCARRIER, maps to !IFF_LOWER_UP:
+
+The driver uses netif_carrier_on() to clear and netif_carrier_off() to
+set this flag. On netif_carrier_off(), the scheduler stops sending
+packets. The name 'carrier' and the inversion are historical; think of
+it as the lower layer.
+
+netif_carrier_ok() can be used to query that bit.
+
+__LINK_STATE_DORMANT, maps to IFF_DORMANT:
+
+Set by the driver to express that the device cannot yet be used
+because some driver-controlled protocol establishment has to
+complete. Corresponding functions are netif_dormant_on() to set the
+flag, netif_dormant_off() to clear it, and netif_dormant() to query it.
+
+On device allocation, the networking core sets the flags to the equivalent
+of netif_carrier_ok() and !netif_dormant().
+
+
+Whenever the driver CHANGES one of these flags, a workqueue event is
+scheduled to translate the flag combination to IFLA_OPERSTATE as
+follows:
+
+!netif_carrier_ok():
+ IF_OPER_LOWERLAYERDOWN if the interface is stacked, IF_OPER_DOWN
+ otherwise. The kernel can recognise stacked interfaces because their
+ ifindex != iflink.
+
+netif_carrier_ok() && netif_dormant():
+ IF_OPER_DORMANT
+
+netif_carrier_ok() && !netif_dormant():
+ IF_OPER_UP if userspace interaction is disabled. Otherwise
+ IF_OPER_DORMANT with the possibility for userspace to initiate the
+ IF_OPER_UP transition afterwards.
+
+
+4. Setting from userspace
+
+Applications have to use the netlink interface to influence the
+RFC2863 operational state of an interface. Setting IFLA_LINKMODE to 1
+via RTM_SETLINK instructs the kernel that an interface should go to
+IF_OPER_DORMANT instead of IF_OPER_UP when the combination
+netif_carrier_ok() && !netif_dormant() is set by the
+driver. Afterwards, the userspace application can set IFLA_OPERSTATE
+to IF_OPER_DORMANT or IF_OPER_UP as long as the driver does not set
+netif_carrier_off() or netif_dormant_on(). Changes made by userspace
+are multicast on the netlink group RTMGRP_LINK.
+
+So basically, an 802.1X supplicant interacts with the kernel like this:
+
+-subscribe to RTMGRP_LINK
+-set IFLA_LINKMODE to 1 via RTM_SETLINK
+-query RTM_GETLINK once to get the initial state
+-if the initial flags are not (IFF_LOWER_UP && !IFF_DORMANT), wait until
+ netlink multicast signals this state
+-do 802.1X; abort if the flags go down again
+-send RTM_SETLINK to set operstate to IF_OPER_UP if authentication
+ succeeds, IF_OPER_DORMANT otherwise
+-see how operstate and IFF_RUNNING are echoed via netlink multicast
+-set the interface back to IF_OPER_DORMANT if 802.1X reauthentication
+ fails
+-restart if the kernel changes the IFF_LOWER_UP or IFF_DORMANT flag
+
+If the supplicant goes down, set IFLA_LINKMODE back to 0 and
+IFLA_OPERSTATE to a sane value.
+
+A routing daemon or DHCP client just needs to check IFF_RUNNING, or
+wait for the operstate to go IF_OPER_UP/IF_OPER_UNKNOWN, before
+considering the interface / querying a DHCP address.
+
+
+For technical questions and/or comments, please e-mail Stefan Rompf
+(stefan at loplof.de).
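To make the setting described in section 4 concrete, a minimal userspace
sketch follows: a sysfs read of the operstate attribute mentioned in
section 2, and an RTM_SETLINK request carrying a single one-byte
attribute (IFLA_LINKMODE or IFLA_OPERSTATE). The helper names are made
up, and error/ACK handling is omitted; treat it as an illustration of
the message layout, not a complete implementation.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* Read the textual operstate ("up", "dormant", ...), assuming the
 * /sys/class/net/<ifname>/operstate attribute referred to in section 2. */
static int read_operstate(const char *ifname, char *buf, int len)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/class/net/%s/operstate", ifname);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return 0;
}

/* Send RTM_SETLINK with one u8 attribute (IFLA_LINKMODE or IFLA_OPERSTATE)
 * for the given interface. Reading the kernel's ACK is omitted. */
static int set_link_u8(int ifindex, unsigned short attr, unsigned char value)
{
	struct {
		struct nlmsghdr  nlh;
		struct ifinfomsg ifi;
		char             buf[RTA_SPACE(sizeof(unsigned char))];
	} req;
	struct rtattr *rta;
	int fd, ret;

	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len   = NLMSG_LENGTH(sizeof(struct ifinfomsg));
	req.nlh.nlmsg_type  = RTM_SETLINK;
	req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	req.ifi.ifi_family  = AF_UNSPEC;
	req.ifi.ifi_index   = ifindex;

	/* append the attribute directly after the ifinfomsg */
	rta = (struct rtattr *)((char *)&req + NLMSG_ALIGN(req.nlh.nlmsg_len));
	rta->rta_type = attr;
	rta->rta_len  = RTA_LENGTH(sizeof(value));
	memcpy(RTA_DATA(rta), &value, sizeof(value));
	req.nlh.nlmsg_len = NLMSG_ALIGN(req.nlh.nlmsg_len) + RTA_LENGTH(sizeof(value));

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0)
		return -1;
	ret = send(fd, &req, req.nlh.nlmsg_len, 0);
	close(fd);
	return ret < 0 ? -1 : 0;
}

A supplicant would call set_link_u8(ifindex, IFLA_LINKMODE, 1) once after
subscribing to RTMGRP_LINK, and later set_link_u8(ifindex, IFLA_OPERSTATE,
6 /* IF_OPER_UP */) or 5 /* IF_OPER_DORMANT */ depending on the
authentication result, watching the multicast group for the echoed change.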
diff --git a/MAINTAINERS b/MAINTAINERS index d6a8e5b434ec..5e3355871416 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1480,10 +1480,11 @@ L: netdev@vger.kernel.org S: Maintained IRDA SUBSYSTEM -P: Jean Tourrilhes +P: Samuel Ortiz +M: samuel@sortiz.org L: irda-users@lists.sourceforge.net (subscribers-only) W: http://irda.sourceforge.net/ -S: Odd Fixes +S: Maintained ISAPNP P: Jaroslav Kysela diff --git a/drivers/net/irda/Makefile b/drivers/net/irda/Makefile index 27ab75f20799..c1ce2398efea 100644 --- a/drivers/net/irda/Makefile +++ b/drivers/net/irda/Makefile @@ -46,4 +46,4 @@ obj-$(CONFIG_MA600_DONGLE) += ma600-sir.o obj-$(CONFIG_TOIM3232_DONGLE) += toim3232-sir.o # The SIR helper module -sir-dev-objs := sir_dev.o sir_dongle.o sir_kthread.o +sir-dev-objs := sir_dev.o sir_dongle.o diff --git a/drivers/net/irda/sir-dev.h b/drivers/net/irda/sir-dev.h index f69fb4cec76f..9fa294a546d6 100644 --- a/drivers/net/irda/sir-dev.h +++ b/drivers/net/irda/sir-dev.h @@ -15,23 +15,14 @@ #define IRDA_SIR_H #include +#include #include #include // iobuff_t -/* FIXME: unify irda_request with sir_fsm! */ - -struct irda_request { - struct list_head lh_request; - unsigned long pending; - void (*func)(void *); - void *data; - struct timer_list timer; -}; - struct sir_fsm { struct semaphore sem; - struct irda_request rq; + struct work_struct work; unsigned state, substate; int param; int result; diff --git a/drivers/net/irda/sir_dev.c b/drivers/net/irda/sir_dev.c index ea7c9464d46a..3b5854d10c17 100644 --- a/drivers/net/irda/sir_dev.c +++ b/drivers/net/irda/sir_dev.c @@ -23,6 +23,298 @@ #include "sir-dev.h" + +static struct workqueue_struct *irda_sir_wq; + +/* STATE MACHINE */ + +/* substate handler of the config-fsm to handle the cases where we want + * to wait for transmit completion before changing the port configuration + */ + +static int sirdev_tx_complete_fsm(struct sir_dev *dev) +{ + struct sir_fsm *fsm = &dev->fsm; + unsigned next_state, delay; + unsigned bytes_left; + + do { + next_state = fsm->substate; /* default: stay in current substate */ + delay = 0; + + switch(fsm->substate) { + + case SIRDEV_STATE_WAIT_XMIT: + if (dev->drv->chars_in_buffer) + bytes_left = dev->drv->chars_in_buffer(dev); + else + bytes_left = 0; + if (!bytes_left) { + next_state = SIRDEV_STATE_WAIT_UNTIL_SENT; + break; + } + + if (dev->speed > 115200) + delay = (bytes_left*8*10000) / (dev->speed/100); + else if (dev->speed > 0) + delay = (bytes_left*10*10000) / (dev->speed/100); + else + delay = 0; + /* expected delay (usec) until remaining bytes are sent */ + if (delay < 100) { + udelay(delay); + delay = 0; + break; + } + /* sleep some longer delay (msec) */ + delay = (delay+999) / 1000; + break; + + case SIRDEV_STATE_WAIT_UNTIL_SENT: + /* block until underlaying hardware buffer are empty */ + if (dev->drv->wait_until_sent) + dev->drv->wait_until_sent(dev); + next_state = SIRDEV_STATE_TX_DONE; + break; + + case SIRDEV_STATE_TX_DONE: + return 0; + + default: + IRDA_ERROR("%s - undefined state\n", __FUNCTION__); + return -EINVAL; + } + fsm->substate = next_state; + } while (delay == 0); + return delay; +} + +/* + * Function sirdev_config_fsm + * + * State machine to handle the configuration of the device (and attached dongle, if any). + * This handler is scheduled for execution in kIrDAd context, so we can sleep. + * however, kIrDAd is shared by all sir_dev devices so we better don't sleep there too + * long. Instead, for longer delays we start a timer to reschedule us later. 
+ * On entry, fsm->sem is always locked and the netdev xmit queue stopped. + * Both must be unlocked/restarted on completion - but only on final exit. + */ + +static void sirdev_config_fsm(void *data) +{ + struct sir_dev *dev = data; + struct sir_fsm *fsm = &dev->fsm; + int next_state; + int ret = -1; + unsigned delay; + + IRDA_DEBUG(2, "%s(), <%ld>\n", __FUNCTION__, jiffies); + + do { + IRDA_DEBUG(3, "%s - state=0x%04x / substate=0x%04x\n", + __FUNCTION__, fsm->state, fsm->substate); + + next_state = fsm->state; + delay = 0; + + switch(fsm->state) { + + case SIRDEV_STATE_DONGLE_OPEN: + if (dev->dongle_drv != NULL) { + ret = sirdev_put_dongle(dev); + if (ret) { + fsm->result = -EINVAL; + next_state = SIRDEV_STATE_ERROR; + break; + } + } + + /* Initialize dongle */ + ret = sirdev_get_dongle(dev, fsm->param); + if (ret) { + fsm->result = ret; + next_state = SIRDEV_STATE_ERROR; + break; + } + + /* Dongles are powered through the modem control lines which + * were just set during open. Before resetting, let's wait for + * the power to stabilize. This is what some dongle drivers did + * in open before, while others didn't - should be safe anyway. + */ + + delay = 50; + fsm->substate = SIRDEV_STATE_DONGLE_RESET; + next_state = SIRDEV_STATE_DONGLE_RESET; + + fsm->param = 9600; + + break; + + case SIRDEV_STATE_DONGLE_CLOSE: + /* shouldn't we just treat this as success=? */ + if (dev->dongle_drv == NULL) { + fsm->result = -EINVAL; + next_state = SIRDEV_STATE_ERROR; + break; + } + + ret = sirdev_put_dongle(dev); + if (ret) { + fsm->result = ret; + next_state = SIRDEV_STATE_ERROR; + break; + } + next_state = SIRDEV_STATE_DONE; + break; + + case SIRDEV_STATE_SET_DTR_RTS: + ret = sirdev_set_dtr_rts(dev, + (fsm->param&0x02) ? TRUE : FALSE, + (fsm->param&0x01) ? TRUE : FALSE); + next_state = SIRDEV_STATE_DONE; + break; + + case SIRDEV_STATE_SET_SPEED: + fsm->substate = SIRDEV_STATE_WAIT_XMIT; + next_state = SIRDEV_STATE_DONGLE_CHECK; + break; + + case SIRDEV_STATE_DONGLE_CHECK: + ret = sirdev_tx_complete_fsm(dev); + if (ret < 0) { + fsm->result = ret; + next_state = SIRDEV_STATE_ERROR; + break; + } + if ((delay=ret) != 0) + break; + + if (dev->dongle_drv) { + fsm->substate = SIRDEV_STATE_DONGLE_RESET; + next_state = SIRDEV_STATE_DONGLE_RESET; + } + else { + dev->speed = fsm->param; + next_state = SIRDEV_STATE_PORT_SPEED; + } + break; + + case SIRDEV_STATE_DONGLE_RESET: + if (dev->dongle_drv->reset) { + ret = dev->dongle_drv->reset(dev); + if (ret < 0) { + fsm->result = ret; + next_state = SIRDEV_STATE_ERROR; + break; + } + } + else + ret = 0; + if ((delay=ret) == 0) { + /* set serial port according to dongle default speed */ + if (dev->drv->set_speed) + dev->drv->set_speed(dev, dev->speed); + fsm->substate = SIRDEV_STATE_DONGLE_SPEED; + next_state = SIRDEV_STATE_DONGLE_SPEED; + } + break; + + case SIRDEV_STATE_DONGLE_SPEED: + if (dev->dongle_drv->reset) { + ret = dev->dongle_drv->set_speed(dev, fsm->param); + if (ret < 0) { + fsm->result = ret; + next_state = SIRDEV_STATE_ERROR; + break; + } + } + else + ret = 0; + if ((delay=ret) == 0) + next_state = SIRDEV_STATE_PORT_SPEED; + break; + + case SIRDEV_STATE_PORT_SPEED: + /* Finally we are ready to change the serial port speed */ + if (dev->drv->set_speed) + dev->drv->set_speed(dev, dev->speed); + dev->new_speed = 0; + next_state = SIRDEV_STATE_DONE; + break; + + case SIRDEV_STATE_DONE: + /* Signal network layer so it can send more frames */ + netif_wake_queue(dev->netdev); + next_state = SIRDEV_STATE_COMPLETE; + break; + + default: + IRDA_ERROR("%s - 
undefined state\n", __FUNCTION__); + fsm->result = -EINVAL; + /* fall thru */ + + case SIRDEV_STATE_ERROR: + IRDA_ERROR("%s - error: %d\n", __FUNCTION__, fsm->result); + +#if 0 /* don't enable this before we have netdev->tx_timeout to recover */ + netif_stop_queue(dev->netdev); +#else + netif_wake_queue(dev->netdev); +#endif + /* fall thru */ + + case SIRDEV_STATE_COMPLETE: + /* config change finished, so we are not busy any longer */ + sirdev_enable_rx(dev); + up(&fsm->sem); + return; + } + fsm->state = next_state; + } while(!delay); + + queue_delayed_work(irda_sir_wq, &fsm->work, msecs_to_jiffies(delay)); +} + +/* schedule some device configuration task for execution by kIrDAd + * on behalf of the above state machine. + * can be called from process or interrupt/tasklet context. + */ + +int sirdev_schedule_request(struct sir_dev *dev, int initial_state, unsigned param) +{ + struct sir_fsm *fsm = &dev->fsm; + + IRDA_DEBUG(2, "%s - state=0x%04x / param=%u\n", __FUNCTION__, initial_state, param); + + if (down_trylock(&fsm->sem)) { + if (in_interrupt() || in_atomic() || irqs_disabled()) { + IRDA_DEBUG(1, "%s(), state machine busy!\n", __FUNCTION__); + return -EWOULDBLOCK; + } else + down(&fsm->sem); + } + + if (fsm->state == SIRDEV_STATE_DEAD) { + /* race with sirdev_close should never happen */ + IRDA_ERROR("%s(), instance staled!\n", __FUNCTION__); + up(&fsm->sem); + return -ESTALE; /* or better EPIPE? */ + } + + netif_stop_queue(dev->netdev); + atomic_set(&dev->enable_rx, 0); + + fsm->state = initial_state; + fsm->param = param; + fsm->result = 0; + + INIT_WORK(&fsm->work, sirdev_config_fsm, dev); + queue_work(irda_sir_wq, &fsm->work); + return 0; +} + + /***************************************************************************/ void sirdev_enable_rx(struct sir_dev *dev) @@ -619,10 +911,6 @@ struct sir_dev * sirdev_get_instance(const struct sir_driver *drv, const char *n spin_lock_init(&dev->tx_lock); init_MUTEX(&dev->fsm.sem); - INIT_LIST_HEAD(&dev->fsm.rq.lh_request); - dev->fsm.rq.pending = 0; - init_timer(&dev->fsm.rq.timer); - dev->drv = drv; dev->netdev = ndev; @@ -682,3 +970,22 @@ int sirdev_put_instance(struct sir_dev *dev) } EXPORT_SYMBOL(sirdev_put_instance); +static int __init sir_wq_init(void) +{ + irda_sir_wq = create_singlethread_workqueue("irda_sir_wq"); + if (!irda_sir_wq) + return -ENOMEM; + return 0; +} + +static void __exit sir_wq_exit(void) +{ + destroy_workqueue(irda_sir_wq); +} + +module_init(sir_wq_init); +module_exit(sir_wq_exit); + +MODULE_AUTHOR("Martin Diehl "); +MODULE_DESCRIPTION("IrDA SIR core"); +MODULE_LICENSE("GPL"); diff --git a/drivers/net/irda/sir_kthread.c b/drivers/net/irda/sir_kthread.c deleted file mode 100644 index e3904d6bfecd..000000000000 --- a/drivers/net/irda/sir_kthread.c +++ /dev/null @@ -1,508 +0,0 @@ -/********************************************************************* - * - * sir_kthread.c: dedicated thread to process scheduled - * sir device setup requests - * - * Copyright (c) 2002 Martin Diehl - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 of - * the License, or (at your option) any later version. 
- * - ********************************************************************/ - -#include -#include -#include -#include -#include -#include -#include - -#include - -#include "sir-dev.h" - -/************************************************************************** - * - * kIrDAd kernel thread and config state machine - * - */ - -struct irda_request_queue { - struct list_head request_list; - spinlock_t lock; - task_t *thread; - struct completion exit; - wait_queue_head_t kick, done; - atomic_t num_pending; -}; - -static struct irda_request_queue irda_rq_queue; - -static int irda_queue_request(struct irda_request *rq) -{ - int ret = 0; - unsigned long flags; - - if (!test_and_set_bit(0, &rq->pending)) { - spin_lock_irqsave(&irda_rq_queue.lock, flags); - list_add_tail(&rq->lh_request, &irda_rq_queue.request_list); - wake_up(&irda_rq_queue.kick); - atomic_inc(&irda_rq_queue.num_pending); - spin_unlock_irqrestore(&irda_rq_queue.lock, flags); - ret = 1; - } - return ret; -} - -static void irda_request_timer(unsigned long data) -{ - struct irda_request *rq = (struct irda_request *)data; - unsigned long flags; - - spin_lock_irqsave(&irda_rq_queue.lock, flags); - list_add_tail(&rq->lh_request, &irda_rq_queue.request_list); - wake_up(&irda_rq_queue.kick); - spin_unlock_irqrestore(&irda_rq_queue.lock, flags); -} - -static int irda_queue_delayed_request(struct irda_request *rq, unsigned long delay) -{ - int ret = 0; - struct timer_list *timer = &rq->timer; - - if (!test_and_set_bit(0, &rq->pending)) { - timer->expires = jiffies + delay; - timer->function = irda_request_timer; - timer->data = (unsigned long)rq; - atomic_inc(&irda_rq_queue.num_pending); - add_timer(timer); - ret = 1; - } - return ret; -} - -static void run_irda_queue(void) -{ - unsigned long flags; - struct list_head *entry, *tmp; - struct irda_request *rq; - - spin_lock_irqsave(&irda_rq_queue.lock, flags); - list_for_each_safe(entry, tmp, &irda_rq_queue.request_list) { - rq = list_entry(entry, struct irda_request, lh_request); - list_del_init(entry); - spin_unlock_irqrestore(&irda_rq_queue.lock, flags); - - clear_bit(0, &rq->pending); - rq->func(rq->data); - - if (atomic_dec_and_test(&irda_rq_queue.num_pending)) - wake_up(&irda_rq_queue.done); - - spin_lock_irqsave(&irda_rq_queue.lock, flags); - } - spin_unlock_irqrestore(&irda_rq_queue.lock, flags); -} - -static int irda_thread(void *startup) -{ - DECLARE_WAITQUEUE(wait, current); - - daemonize("kIrDAd"); - - irda_rq_queue.thread = current; - - complete((struct completion *)startup); - - while (irda_rq_queue.thread != NULL) { - - /* We use TASK_INTERRUPTIBLE, rather than - * TASK_UNINTERRUPTIBLE. Andrew Morton made this - * change ; he told me that it is safe, because "signal - * blocking is now handled in daemonize()", he added - * that the problem is that "uninterruptible sleep - * contributes to load average", making user worry. 
- * Jean II */ - set_task_state(current, TASK_INTERRUPTIBLE); - add_wait_queue(&irda_rq_queue.kick, &wait); - if (list_empty(&irda_rq_queue.request_list)) - schedule(); - else - __set_task_state(current, TASK_RUNNING); - remove_wait_queue(&irda_rq_queue.kick, &wait); - - /* make swsusp happy with our thread */ - try_to_freeze(); - - run_irda_queue(); - } - -#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,35) - reparent_to_init(); -#endif - complete_and_exit(&irda_rq_queue.exit, 0); - /* never reached */ - return 0; -} - - -static void flush_irda_queue(void) -{ - if (atomic_read(&irda_rq_queue.num_pending)) { - - DECLARE_WAITQUEUE(wait, current); - - if (!list_empty(&irda_rq_queue.request_list)) - run_irda_queue(); - - set_task_state(current, TASK_UNINTERRUPTIBLE); - add_wait_queue(&irda_rq_queue.done, &wait); - if (atomic_read(&irda_rq_queue.num_pending)) - schedule(); - else - __set_task_state(current, TASK_RUNNING); - remove_wait_queue(&irda_rq_queue.done, &wait); - } -} - -/* substate handler of the config-fsm to handle the cases where we want - * to wait for transmit completion before changing the port configuration - */ - -static int irda_tx_complete_fsm(struct sir_dev *dev) -{ - struct sir_fsm *fsm = &dev->fsm; - unsigned next_state, delay; - unsigned bytes_left; - - do { - next_state = fsm->substate; /* default: stay in current substate */ - delay = 0; - - switch(fsm->substate) { - - case SIRDEV_STATE_WAIT_XMIT: - if (dev->drv->chars_in_buffer) - bytes_left = dev->drv->chars_in_buffer(dev); - else - bytes_left = 0; - if (!bytes_left) { - next_state = SIRDEV_STATE_WAIT_UNTIL_SENT; - break; - } - - if (dev->speed > 115200) - delay = (bytes_left*8*10000) / (dev->speed/100); - else if (dev->speed > 0) - delay = (bytes_left*10*10000) / (dev->speed/100); - else - delay = 0; - /* expected delay (usec) until remaining bytes are sent */ - if (delay < 100) { - udelay(delay); - delay = 0; - break; - } - /* sleep some longer delay (msec) */ - delay = (delay+999) / 1000; - break; - - case SIRDEV_STATE_WAIT_UNTIL_SENT: - /* block until underlaying hardware buffer are empty */ - if (dev->drv->wait_until_sent) - dev->drv->wait_until_sent(dev); - next_state = SIRDEV_STATE_TX_DONE; - break; - - case SIRDEV_STATE_TX_DONE: - return 0; - - default: - IRDA_ERROR("%s - undefined state\n", __FUNCTION__); - return -EINVAL; - } - fsm->substate = next_state; - } while (delay == 0); - return delay; -} - -/* - * Function irda_config_fsm - * - * State machine to handle the configuration of the device (and attached dongle, if any). - * This handler is scheduled for execution in kIrDAd context, so we can sleep. - * however, kIrDAd is shared by all sir_dev devices so we better don't sleep there too - * long. Instead, for longer delays we start a timer to reschedule us later. - * On entry, fsm->sem is always locked and the netdev xmit queue stopped. - * Both must be unlocked/restarted on completion - but only on final exit. 
- */ - -static void irda_config_fsm(void *data) -{ - struct sir_dev *dev = data; - struct sir_fsm *fsm = &dev->fsm; - int next_state; - int ret = -1; - unsigned delay; - - IRDA_DEBUG(2, "%s(), <%ld>\n", __FUNCTION__, jiffies); - - do { - IRDA_DEBUG(3, "%s - state=0x%04x / substate=0x%04x\n", - __FUNCTION__, fsm->state, fsm->substate); - - next_state = fsm->state; - delay = 0; - - switch(fsm->state) { - - case SIRDEV_STATE_DONGLE_OPEN: - if (dev->dongle_drv != NULL) { - ret = sirdev_put_dongle(dev); - if (ret) { - fsm->result = -EINVAL; - next_state = SIRDEV_STATE_ERROR; - break; - } - } - - /* Initialize dongle */ - ret = sirdev_get_dongle(dev, fsm->param); - if (ret) { - fsm->result = ret; - next_state = SIRDEV_STATE_ERROR; - break; - } - - /* Dongles are powered through the modem control lines which - * were just set during open. Before resetting, let's wait for - * the power to stabilize. This is what some dongle drivers did - * in open before, while others didn't - should be safe anyway. - */ - - delay = 50; - fsm->substate = SIRDEV_STATE_DONGLE_RESET; - next_state = SIRDEV_STATE_DONGLE_RESET; - - fsm->param = 9600; - - break; - - case SIRDEV_STATE_DONGLE_CLOSE: - /* shouldn't we just treat this as success=? */ - if (dev->dongle_drv == NULL) { - fsm->result = -EINVAL; - next_state = SIRDEV_STATE_ERROR; - break; - } - - ret = sirdev_put_dongle(dev); - if (ret) { - fsm->result = ret; - next_state = SIRDEV_STATE_ERROR; - break; - } - next_state = SIRDEV_STATE_DONE; - break; - - case SIRDEV_STATE_SET_DTR_RTS: - ret = sirdev_set_dtr_rts(dev, - (fsm->param&0x02) ? TRUE : FALSE, - (fsm->param&0x01) ? TRUE : FALSE); - next_state = SIRDEV_STATE_DONE; - break; - - case SIRDEV_STATE_SET_SPEED: - fsm->substate = SIRDEV_STATE_WAIT_XMIT; - next_state = SIRDEV_STATE_DONGLE_CHECK; - break; - - case SIRDEV_STATE_DONGLE_CHECK: - ret = irda_tx_complete_fsm(dev); - if (ret < 0) { - fsm->result = ret; - next_state = SIRDEV_STATE_ERROR; - break; - } - if ((delay=ret) != 0) - break; - - if (dev->dongle_drv) { - fsm->substate = SIRDEV_STATE_DONGLE_RESET; - next_state = SIRDEV_STATE_DONGLE_RESET; - } - else { - dev->speed = fsm->param; - next_state = SIRDEV_STATE_PORT_SPEED; - } - break; - - case SIRDEV_STATE_DONGLE_RESET: - if (dev->dongle_drv->reset) { - ret = dev->dongle_drv->reset(dev); - if (ret < 0) { - fsm->result = ret; - next_state = SIRDEV_STATE_ERROR; - break; - } - } - else - ret = 0; - if ((delay=ret) == 0) { - /* set serial port according to dongle default speed */ - if (dev->drv->set_speed) - dev->drv->set_speed(dev, dev->speed); - fsm->substate = SIRDEV_STATE_DONGLE_SPEED; - next_state = SIRDEV_STATE_DONGLE_SPEED; - } - break; - - case SIRDEV_STATE_DONGLE_SPEED: - if (dev->dongle_drv->reset) { - ret = dev->dongle_drv->set_speed(dev, fsm->param); - if (ret < 0) { - fsm->result = ret; - next_state = SIRDEV_STATE_ERROR; - break; - } - } - else - ret = 0; - if ((delay=ret) == 0) - next_state = SIRDEV_STATE_PORT_SPEED; - break; - - case SIRDEV_STATE_PORT_SPEED: - /* Finally we are ready to change the serial port speed */ - if (dev->drv->set_speed) - dev->drv->set_speed(dev, dev->speed); - dev->new_speed = 0; - next_state = SIRDEV_STATE_DONE; - break; - - case SIRDEV_STATE_DONE: - /* Signal network layer so it can send more frames */ - netif_wake_queue(dev->netdev); - next_state = SIRDEV_STATE_COMPLETE; - break; - - default: - IRDA_ERROR("%s - undefined state\n", __FUNCTION__); - fsm->result = -EINVAL; - /* fall thru */ - - case SIRDEV_STATE_ERROR: - IRDA_ERROR("%s - error: %d\n", __FUNCTION__, 
fsm->result); - -#if 0 /* don't enable this before we have netdev->tx_timeout to recover */ - netif_stop_queue(dev->netdev); -#else - netif_wake_queue(dev->netdev); -#endif - /* fall thru */ - - case SIRDEV_STATE_COMPLETE: - /* config change finished, so we are not busy any longer */ - sirdev_enable_rx(dev); - up(&fsm->sem); - return; - } - fsm->state = next_state; - } while(!delay); - - irda_queue_delayed_request(&fsm->rq, msecs_to_jiffies(delay)); -} - -/* schedule some device configuration task for execution by kIrDAd - * on behalf of the above state machine. - * can be called from process or interrupt/tasklet context. - */ - -int sirdev_schedule_request(struct sir_dev *dev, int initial_state, unsigned param) -{ - struct sir_fsm *fsm = &dev->fsm; - int xmit_was_down; - - IRDA_DEBUG(2, "%s - state=0x%04x / param=%u\n", __FUNCTION__, initial_state, param); - - if (down_trylock(&fsm->sem)) { - if (in_interrupt() || in_atomic() || irqs_disabled()) { - IRDA_DEBUG(1, "%s(), state machine busy!\n", __FUNCTION__); - return -EWOULDBLOCK; - } else - down(&fsm->sem); - } - - if (fsm->state == SIRDEV_STATE_DEAD) { - /* race with sirdev_close should never happen */ - IRDA_ERROR("%s(), instance staled!\n", __FUNCTION__); - up(&fsm->sem); - return -ESTALE; /* or better EPIPE? */ - } - - xmit_was_down = netif_queue_stopped(dev->netdev); - netif_stop_queue(dev->netdev); - atomic_set(&dev->enable_rx, 0); - - fsm->state = initial_state; - fsm->param = param; - fsm->result = 0; - - INIT_LIST_HEAD(&fsm->rq.lh_request); - fsm->rq.pending = 0; - fsm->rq.func = irda_config_fsm; - fsm->rq.data = dev; - - if (!irda_queue_request(&fsm->rq)) { /* returns 0 on error! */ - atomic_set(&dev->enable_rx, 1); - if (!xmit_was_down) - netif_wake_queue(dev->netdev); - up(&fsm->sem); - return -EAGAIN; - } - return 0; -} - -static int __init irda_thread_create(void) -{ - struct completion startup; - int pid; - - spin_lock_init(&irda_rq_queue.lock); - irda_rq_queue.thread = NULL; - INIT_LIST_HEAD(&irda_rq_queue.request_list); - init_waitqueue_head(&irda_rq_queue.kick); - init_waitqueue_head(&irda_rq_queue.done); - atomic_set(&irda_rq_queue.num_pending, 0); - - init_completion(&startup); - pid = kernel_thread(irda_thread, &startup, CLONE_FS|CLONE_FILES); - if (pid <= 0) - return -EAGAIN; - else - wait_for_completion(&startup); - - return 0; -} - -static void __exit irda_thread_join(void) -{ - if (irda_rq_queue.thread) { - flush_irda_queue(); - init_completion(&irda_rq_queue.exit); - irda_rq_queue.thread = NULL; - wake_up(&irda_rq_queue.kick); - wait_for_completion(&irda_rq_queue.exit); - } -} - -module_init(irda_thread_create); -module_exit(irda_thread_join); - -MODULE_AUTHOR("Martin Diehl "); -MODULE_DESCRIPTION("IrDA SIR core"); -MODULE_LICENSE("GPL"); - diff --git a/drivers/net/irda/smsc-ircc2.c b/drivers/net/irda/smsc-ircc2.c index 58f76cefbc83..a4674044bd6f 100644 --- a/drivers/net/irda/smsc-ircc2.c +++ b/drivers/net/irda/smsc-ircc2.c @@ -54,6 +54,7 @@ #include #include #include +#include #include #include @@ -358,6 +359,16 @@ static inline void register_bank(int iobase, int bank) iobase + IRCC_MASTER); } +#ifdef CONFIG_PNP +/* PNP hotplug support */ +static const struct pnp_device_id smsc_ircc_pnp_table[] = { + { .id = "SMCf010", .driver_data = 0 }, + /* and presumably others */ + { } +}; +MODULE_DEVICE_TABLE(pnp, smsc_ircc_pnp_table); +#endif + /******************************************************************************* * @@ -2072,7 +2083,8 @@ static void smsc_ircc_sir_wait_hw_transmitter_finish(struct smsc_ircc_cb 
*self) /* PROBING * - * + * REVISIT we can be told about the device by PNP, and should use that info + * instead of probing hardware and creating a platform_device ... */ static int __init smsc_ircc_look_for_chips(void) diff --git a/drivers/net/tg3.c b/drivers/net/tg3.c index beeb612be98f..2bd9592b75cd 100644 --- a/drivers/net/tg3.c +++ b/drivers/net/tg3.c @@ -8454,6 +8454,9 @@ static int tg3_run_loopback(struct tg3 *tp, int loopback_mode) tx_len = 1514; skb = dev_alloc_skb(tx_len); + if (!skb) + return -ENOMEM; + tx_data = skb_put(skb, tx_len); memcpy(tx_data, tp->dev->dev_addr, 6); memset(tx_data + 6, 0x0, 8); diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index a461b51d6076..f4169bbb60eb 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -433,8 +433,7 @@ struct net_device /* register/unregister state machine */ enum { NETREG_UNINITIALIZED=0, - NETREG_REGISTERING, /* called register_netdevice */ - NETREG_REGISTERED, /* completed register todo */ + NETREG_REGISTERED, /* completed register_netdevice */ NETREG_UNREGISTERING, /* called unregister_netdevice */ NETREG_UNREGISTERED, /* completed unregister todo */ NETREG_RELEASED, /* called free_netdev */ diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c index 59eef42d4a42..ad1c7af65ec8 100644 --- a/net/bridge/br_if.c +++ b/net/bridge/br_if.c @@ -308,26 +308,19 @@ int br_add_bridge(const char *name) if (ret) goto err2; - /* network device kobject is not setup until - * after rtnl_unlock does it's hotplug magic. - * so hold reference to avoid race. - */ - dev_hold(dev); - rtnl_unlock(); - ret = br_sysfs_addbr(dev); - dev_put(dev); - - if (ret) - unregister_netdev(dev); - out: - return ret; + if (ret) + goto err3; + rtnl_unlock(); + return 0; + err3: + unregister_netdev(dev); err2: free_netdev(dev); err1: rtnl_unlock(); - goto out; + return ret; } int br_del_bridge(const char *name) diff --git a/net/core/dev.c b/net/core/dev.c index 9ab3cfa58466..2dce673a039b 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -193,7 +193,7 @@ static inline struct hlist_head *dev_index_hash(int ifindex) * Our notifier list */ -static BLOCKING_NOTIFIER_HEAD(netdev_chain); +static RAW_NOTIFIER_HEAD(netdev_chain); /* * Device drivers call our routines to queue packets here. We empty the @@ -736,7 +736,7 @@ int dev_change_name(struct net_device *dev, char *newname) if (!err) { hlist_del(&dev->name_hlist); hlist_add_head(&dev->name_hlist, dev_name_hash(dev->name)); - blocking_notifier_call_chain(&netdev_chain, + raw_notifier_call_chain(&netdev_chain, NETDEV_CHANGENAME, dev); } @@ -751,7 +751,7 @@ int dev_change_name(struct net_device *dev, char *newname) */ void netdev_features_change(struct net_device *dev) { - blocking_notifier_call_chain(&netdev_chain, NETDEV_FEAT_CHANGE, dev); + raw_notifier_call_chain(&netdev_chain, NETDEV_FEAT_CHANGE, dev); } EXPORT_SYMBOL(netdev_features_change); @@ -766,7 +766,7 @@ EXPORT_SYMBOL(netdev_features_change); void netdev_state_change(struct net_device *dev) { if (dev->flags & IFF_UP) { - blocking_notifier_call_chain(&netdev_chain, + raw_notifier_call_chain(&netdev_chain, NETDEV_CHANGE, dev); rtmsg_ifinfo(RTM_NEWLINK, dev, 0); } @@ -864,7 +864,7 @@ int dev_open(struct net_device *dev) /* * ... and announce new interface. 
*/ - blocking_notifier_call_chain(&netdev_chain, NETDEV_UP, dev); + raw_notifier_call_chain(&netdev_chain, NETDEV_UP, dev); } return ret; } @@ -887,7 +887,7 @@ int dev_close(struct net_device *dev) * Tell people we are going down, so that they can * prepare to death, when device is still operating. */ - blocking_notifier_call_chain(&netdev_chain, NETDEV_GOING_DOWN, dev); + raw_notifier_call_chain(&netdev_chain, NETDEV_GOING_DOWN, dev); dev_deactivate(dev); @@ -924,7 +924,7 @@ int dev_close(struct net_device *dev) /* * Tell people we are down */ - blocking_notifier_call_chain(&netdev_chain, NETDEV_DOWN, dev); + raw_notifier_call_chain(&netdev_chain, NETDEV_DOWN, dev); return 0; } @@ -955,7 +955,7 @@ int register_netdevice_notifier(struct notifier_block *nb) int err; rtnl_lock(); - err = blocking_notifier_chain_register(&netdev_chain, nb); + err = raw_notifier_chain_register(&netdev_chain, nb); if (!err) { for (dev = dev_base; dev; dev = dev->next) { nb->notifier_call(nb, NETDEV_REGISTER, dev); @@ -983,7 +983,7 @@ int unregister_netdevice_notifier(struct notifier_block *nb) int err; rtnl_lock(); - err = blocking_notifier_chain_unregister(&netdev_chain, nb); + err = raw_notifier_chain_unregister(&netdev_chain, nb); rtnl_unlock(); return err; } @@ -994,12 +994,12 @@ int unregister_netdevice_notifier(struct notifier_block *nb) * @v: pointer passed unmodified to notifier function * * Call all network notifier blocks. Parameters and return value - * are as for blocking_notifier_call_chain(). + * are as for raw_notifier_call_chain(). */ int call_netdevice_notifiers(unsigned long val, void *v) { - return blocking_notifier_call_chain(&netdev_chain, val, v); + return raw_notifier_call_chain(&netdev_chain, val, v); } /* When > 0 there are consumers of rx skb time stamps */ @@ -2308,7 +2308,7 @@ int dev_change_flags(struct net_device *dev, unsigned flags) if (dev->flags & IFF_UP && ((old_flags ^ dev->flags) &~ (IFF_UP | IFF_PROMISC | IFF_ALLMULTI | IFF_VOLATILE))) - blocking_notifier_call_chain(&netdev_chain, + raw_notifier_call_chain(&netdev_chain, NETDEV_CHANGE, dev); if ((flags ^ dev->gflags) & IFF_PROMISC) { @@ -2353,7 +2353,7 @@ int dev_set_mtu(struct net_device *dev, int new_mtu) else dev->mtu = new_mtu; if (!err && dev->flags & IFF_UP) - blocking_notifier_call_chain(&netdev_chain, + raw_notifier_call_chain(&netdev_chain, NETDEV_CHANGEMTU, dev); return err; } @@ -2370,7 +2370,7 @@ int dev_set_mac_address(struct net_device *dev, struct sockaddr *sa) return -ENODEV; err = dev->set_mac_address(dev, sa); if (!err) - blocking_notifier_call_chain(&netdev_chain, + raw_notifier_call_chain(&netdev_chain, NETDEV_CHANGEADDR, dev); return err; } @@ -2427,7 +2427,7 @@ static int dev_ifsioc(struct ifreq *ifr, unsigned int cmd) return -EINVAL; memcpy(dev->broadcast, ifr->ifr_hwaddr.sa_data, min(sizeof ifr->ifr_hwaddr.sa_data, (size_t) dev->addr_len)); - blocking_notifier_call_chain(&netdev_chain, + raw_notifier_call_chain(&netdev_chain, NETDEV_CHANGEADDR, dev); return 0; @@ -2777,6 +2777,8 @@ int register_netdevice(struct net_device *dev) BUG_ON(dev_boot_phase); ASSERT_RTNL(); + might_sleep(); + /* When net_device's are persistent, this will be fatal. 
*/ BUG_ON(dev->reg_state != NETREG_UNINITIALIZED); @@ -2863,6 +2865,11 @@ int register_netdevice(struct net_device *dev) if (!dev->rebuild_header) dev->rebuild_header = default_rebuild_header; + ret = netdev_register_sysfs(dev); + if (ret) + goto out_err; + dev->reg_state = NETREG_REGISTERED; + /* * Default initial state at registry is that the * device is present. @@ -2878,14 +2885,11 @@ int register_netdevice(struct net_device *dev) hlist_add_head(&dev->name_hlist, head); hlist_add_head(&dev->index_hlist, dev_index_hash(dev->ifindex)); dev_hold(dev); - dev->reg_state = NETREG_REGISTERING; write_unlock_bh(&dev_base_lock); /* Notify protocols, that a new device appeared. */ - blocking_notifier_call_chain(&netdev_chain, NETDEV_REGISTER, dev); + raw_notifier_call_chain(&netdev_chain, NETDEV_REGISTER, dev); - /* Finish registration after unlock */ - net_set_todo(dev); ret = 0; out: @@ -2961,7 +2965,7 @@ static void netdev_wait_allrefs(struct net_device *dev) rtnl_lock(); /* Rebroadcast unregister notification */ - blocking_notifier_call_chain(&netdev_chain, + raw_notifier_call_chain(&netdev_chain, NETDEV_UNREGISTER, dev); if (test_bit(__LINK_STATE_LINKWATCH_PENDING, @@ -3008,7 +3012,7 @@ static void netdev_wait_allrefs(struct net_device *dev) * * We are invoked by rtnl_unlock() after it drops the semaphore. * This allows us to deal with problems: - * 1) We can create/delete sysfs objects which invoke hotplug + * 1) We can delete sysfs objects which invoke hotplug * without deadlocking with linkwatch via keventd. * 2) Since we run with the RTNL semaphore not held, we can sleep * safely in order to wait for the netdev refcnt to drop to zero. @@ -3017,8 +3021,6 @@ static DEFINE_MUTEX(net_todo_run_mutex); void netdev_run_todo(void) { struct list_head list = LIST_HEAD_INIT(list); - int err; - /* Need to guard against multiple cpu's getting out of order. */ mutex_lock(&net_todo_run_mutex); @@ -3041,40 +3043,29 @@ void netdev_run_todo(void) = list_entry(list.next, struct net_device, todo_list); list_del(&dev->todo_list); - switch(dev->reg_state) { - case NETREG_REGISTERING: - err = netdev_register_sysfs(dev); - if (err) - printk(KERN_ERR "%s: failed sysfs registration (%d)\n", - dev->name, err); - dev->reg_state = NETREG_REGISTERED; - break; - - case NETREG_UNREGISTERING: - netdev_unregister_sysfs(dev); - dev->reg_state = NETREG_UNREGISTERED; - - netdev_wait_allrefs(dev); - - /* paranoia */ - BUG_ON(atomic_read(&dev->refcnt)); - BUG_TRAP(!dev->ip_ptr); - BUG_TRAP(!dev->ip6_ptr); - BUG_TRAP(!dev->dn_ptr); - - - /* It must be the very last action, - * after this 'dev' may point to freed up memory. - */ - if (dev->destructor) - dev->destructor(dev); - break; - - default: + if (unlikely(dev->reg_state != NETREG_UNREGISTERING)) { printk(KERN_ERR "network todo '%s' but state %d\n", dev->name, dev->reg_state); - break; + dump_stack(); + continue; } + + netdev_unregister_sysfs(dev); + dev->reg_state = NETREG_UNREGISTERED; + + netdev_wait_allrefs(dev); + + /* paranoia */ + BUG_ON(atomic_read(&dev->refcnt)); + BUG_TRAP(!dev->ip_ptr); + BUG_TRAP(!dev->ip6_ptr); + BUG_TRAP(!dev->dn_ptr); + + /* It must be the very last action, + * after this 'dev' may point to freed up memory. + */ + if (dev->destructor) + dev->destructor(dev); } out: @@ -3216,7 +3207,7 @@ int unregister_netdevice(struct net_device *dev) /* Notify protocols, that we are about to destroy this device. They should clean all the things. 
*/ - blocking_notifier_call_chain(&netdev_chain, NETDEV_UNREGISTER, dev); + raw_notifier_call_chain(&netdev_chain, NETDEV_UNREGISTER, dev); /* * Flush the multicast chain diff --git a/net/core/link_watch.c b/net/core/link_watch.c index 341de44c7ed1..646937cc2d84 100644 --- a/net/core/link_watch.c +++ b/net/core/link_watch.c @@ -170,13 +170,13 @@ void linkwatch_fire_event(struct net_device *dev) spin_unlock_irqrestore(&lweventlist_lock, flags); if (!test_and_set_bit(LW_RUNNING, &linkwatch_flags)) { - unsigned long thisevent = jiffies; + unsigned long delay = linkwatch_nextevent - jiffies; - if (thisevent >= linkwatch_nextevent) { + /* If we wrap around we'll delay it by at most HZ. */ + if (!delay || delay > HZ) schedule_work(&linkwatch_work); - } else { - schedule_delayed_work(&linkwatch_work, linkwatch_nextevent - thisevent); - } + else + schedule_delayed_work(&linkwatch_work, delay); } } } diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c index 9bebad07bf2e..cbcae6544622 100644 --- a/net/ipv4/ip_options.c +++ b/net/ipv4/ip_options.c @@ -209,7 +209,7 @@ int ip_options_echo(struct ip_options * dopt, struct sk_buff * skb) void ip_options_fragment(struct sk_buff * skb) { - unsigned char * optptr = skb->nh.raw; + unsigned char * optptr = skb->nh.raw + sizeof(struct iphdr); struct ip_options * opt = &(IPCB(skb)->opt); int l = opt->optlen; int optlen; diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c index f8f3a37a1494..eb2865d5ae28 100644 --- a/net/ipv6/inet6_connection_sock.c +++ b/net/ipv6/inet6_connection_sock.c @@ -173,6 +173,7 @@ int inet6_csk_xmit(struct sk_buff *skb, int ipfragok) if (err) { sk->sk_err_soft = -err; + kfree_skb(skb); return err; } @@ -181,6 +182,7 @@ int inet6_csk_xmit(struct sk_buff *skb, int ipfragok) if ((err = xfrm_lookup(&dst, &fl, sk, 0)) < 0) { sk->sk_route_caps = 0; + kfree_skb(skb); return err; } diff --git a/net/irda/irias_object.c b/net/irda/irias_object.c index c6d169fbdceb..82e665c79991 100644 --- a/net/irda/irias_object.c +++ b/net/irda/irias_object.c @@ -257,7 +257,6 @@ struct ias_attrib *irias_find_attrib(struct ias_object *obj, char *name) /* Unsafe (locking), attrib might change */ return attrib; } -EXPORT_SYMBOL(irias_find_attrib); /* * Function irias_add_attribute (obj, attrib) @@ -484,7 +483,6 @@ struct ias_value *irias_new_string_value(char *string) return value; } -EXPORT_SYMBOL(irias_new_string_value); /* * Function irias_new_octseq_value (octets, len) @@ -519,7 +517,6 @@ struct ias_value *irias_new_octseq_value(__u8 *octseq , int len) memcpy(value->t.oct_seq, octseq , len); return value; } -EXPORT_SYMBOL(irias_new_octseq_value); struct ias_value *irias_new_missing_value(void) { diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c index 91132f6871d7..f1c7bd29f2cd 100644 --- a/net/sched/sch_hfsc.c +++ b/net/sched/sch_hfsc.c @@ -974,10 +974,10 @@ hfsc_adjust_levels(struct hfsc_class *cl) do { level = 0; list_for_each_entry(p, &cl->children, siblings) { - if (p->level > level) - level = p->level; + if (p->level >= level) + level = p->level + 1; } - cl->level = level + 1; + cl->level = level; } while ((cl = cl->cl_parent) != NULL); }