
Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6

* master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6: (27 commits)
  [Bluetooth] Add RFCOMM role switch support
  [Bluetooth] Allow disabling of credit based flow control
  [Bluetooth] Small cleanup of the L2CAP source code
  [Bluetooth] Use real devices for host controllers
  [Bluetooth] Add platform device for virtual and serial devices
  [Bluetooth] Add automatic sniff mode support
  [Bluetooth] Correct SCO buffer size on request
  [Bluetooth] Add suspend/resume support to the HCI USB driver
  [Bluetooth] Use raw mode for the Frontline sniffer device
  [BRIDGE]: br_dump_ifinfo index fix
  [ATM]: add+use poison defines
  [NET]: add+use poison defines
  [IOAT]: fix kernel-doc in source files
  [IOAT]: fix header file kernel-doc
  [TG3]: Add ipv6 TSO feature
  [IPV6]: Fix ipv6 GSO payload length
  [TIPC] Fixed sk_buff panic caused by tipc_link_bundle_buf (REVISED)
  [NET]: Verify gso_type too in gso_segment
  [IPVS]: Add sysctl documentation
  [ROSE]: Try all routes when establishing a ROSE connection.
  ...
Linus Torvalds 2006-07-03 21:28:14 -07:00
commit 67ab33db8b
44 changed files with 1173 additions and 478 deletions

View File

@ -0,0 +1,143 @@
/proc/sys/net/ipv4/vs/* Variables:
am_droprate - INTEGER
default 10
It sets the always-mode drop rate, which is used in mode 3
of the drop_rate defense.
amemthresh - INTEGER
default 1024
It sets the available memory threshold (in pages), which is
used in the automatic modes of defense (sketched below). When
there is not enough available memory, the respective strategy
will be enabled and the variable is automatically set to 2;
otherwise the strategy is disabled and the variable is set to 1.
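A hedged, illustrative sketch of the automatic-mode update (the
names are hypothetical; the real logic lives in the IPVS control
code):
	/* run periodically for each defense strategy whose sysctl
	 * variable is in an automatic mode (1 or 2) */
	static void example_update_defense_mode(int *mode, int amemthresh,
						int available_pages)
	{
		if (*mode == 1 || *mode == 2)
			*mode = (available_pages < amemthresh) ? 2 : 1;
	}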
cache_bypass - BOOLEAN
0 - disabled (default)
not 0 - enabled
If it is enabled, packets are forwarded directly to the original
destination when no cache server is available and the destination
address is not local (iph->daddr is RTN_UNICAST). It is mostly
used in transparent web cache clusters.
debug_level - INTEGER
0 - transmission error messages (default)
1 - non-fatal error messages
2 - configuration
3 - destination trash
4 - drop entry
5 - service lookup
6 - scheduling
7 - connection new/expire, lookup and synchronization
8 - state transition
9 - binding destination, template checks and applications
10 - IPVS packet transmission
11 - IPVS packet handling (ip_vs_in/ip_vs_out)
12 or more - packet traversal
Only available when IPVS is compiled with CONFIG_IPVS_DEBUG
enabled. Higher debugging levels include the messages of lower
debugging levels, so setting debug level 2 includes level 0, 1
and 2 messages. Thus, logging becomes more and more verbose the
higher the level (see the sketch below).
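For illustration, a hedged sketch of the inclusive level check (the
macro and variable names are hypothetical, not the exact kernel
symbols):
	/* emit a message tagged `level` only when the configured debug
	 * level is at least that high, so higher settings include all
	 * lower-level messages */
	#define EXAMPLE_IPVS_DBG(level, fmt, args...)			\
		do {							\
			if (example_debug_level >= (level))		\
				printk(KERN_DEBUG "IPVS: " fmt, ##args); \
		} while (0)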
drop_entry - INTEGER
0 - disabled (default)
The drop_entry defense randomly drops entries in the connection
hash table in order to reclaim memory for new connections. In
the current code, the drop_entry procedure can be activated
every second; it then randomly scans 1/32 of the whole table
and drops entries that are in the SYN-RECV/SYNACK state, which
should be effective against SYN-flooding attacks.
The valid values of drop_entry are from 0 to 3, where 0 means
that this strategy is always disabled, 1 and 2 mean automatic
modes (when there is not enough available memory, the strategy
is enabled and the variable is automatically set to 2,
otherwise the strategy is disabled and the variable is set to
1), and 3 means that the strategy is always enabled.
drop_packet - INTEGER
0 - disabled (default)
The drop_packet defense is designed to drop 1/rate of the
incoming packets before forwarding them to real servers. If the
rate is 1, then all incoming packets are dropped.
The value definition is the same as that of drop_entry. In the
automatic mode, the rate is determined by the following formula
(sketched below): rate = amemthresh / (amemthresh - available_memory),
applied when available memory is less than the available memory
threshold. When mode 3 is set, the always-mode drop rate is
controlled by /proc/sys/net/ipv4/vs/am_droprate.
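A hedged sketch of the automatic rate computation (illustrative
names only):
	/* returns N meaning "drop one of every N packets"; 0 disables */
	static int example_drop_rate(int amemthresh, int available_memory)
	{
		if (available_memory >= amemthresh)
			return 0;
		return amemthresh / (amemthresh - available_memory);
	}
As available memory approaches zero, the rate approaches 1, i.e.
every incoming packet is dropped.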
expire_nodest_conn - BOOLEAN
0 - disabled (default)
not 0 - enabled
With the default value of 0, the load balancer silently drops
packets when the destination server is not available. This may
be useful when a user-space monitoring program deletes a
destination server (because of server overload or false
detection) and adds it back later, so that connections to the
server can continue.
If this feature is enabled, the load balancer expires the
connection immediately when a packet arrives and its
destination server is not available; the client program is then
notified that the connection is closed. This is equivalent to
the often-requested ability to flush connections whose
destination is not available (sketched below).
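A hedged sketch of the per-packet decision (hypothetical names; the
real test lives in the IPVS packet input path):
	if (!example_dest_available(cp->dest)) {
		if (example_expire_nodest_conn)
			example_expire_conn_now(cp); /* client sees a close */
		kfree_skb(skb);		/* the packet is dropped either way */
		return NF_STOLEN;
	}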
expire_quiescent_template - BOOLEAN
0 - disabled (default)
not 0 - enabled
When set to a non-zero value, the load balancer expires
persistence templates when the destination server is quiescent.
This may be useful when a user makes a destination server
quiescent by setting its weight to 0 and wants subsequent,
otherwise persistent, connections to be sent to a different
destination server. By default, new persistent connections are
allowed to quiescent destination servers.
If this feature is enabled, the load balancer will expire the
persistence template if it is to be used to schedule a new
connection and the destination server is quiescent.
nat_icmp_send - BOOLEAN
0 - disabled (default)
not 0 - enabled
It controls the sending of ICMP error messages
(ICMP_DEST_UNREACH) for VS/NAT when the load balancer receives
packets from real servers but the connection entries don't exist.
secure_tcp - INTEGER
0 - disabled (default)
The secure_tcp defense uses a more complicated state transition
table and, optionally, shorter timeouts for each state. For
VS/NAT, it delays entering the ESTABLISHED state until the real
server starts to send data and ACK packets (after the 3-way
handshake).
The value definition is the same as that of drop_entry or
drop_packet.
sync_threshold - INTEGER
default 3
It sets the synchronization threshold, which is the minimum
number of incoming packets that a connection must receive
before it will be synchronized. A connection is synchronized
every time the number of its incoming packets modulo 50 equals
the threshold (sketched below). The valid range of the threshold
is from 0 to 49.
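A hedged sketch of the test (illustrative names): with the default
threshold of 3, a connection is synchronized on its 3rd, 53rd,
103rd, ... incoming packet:
	if (cp->in_pkts % 50 == example_sync_threshold)
		example_sync_connection(cp);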

View File

@ -31,6 +31,7 @@
#include <linux/atmdev.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/poison.h>
#include <asm/atomic.h>
#include <asm/io.h>
@ -1995,7 +1996,7 @@ static int __devinit ucode_init (loader_block * lb, amb_dev * dev) {
}
i += 1;
}
if (*pointer == 0xdeadbeef) {
if (*pointer == ATM_POISON) {
return loader_start (lb, dev, ucode_start);
} else {
// cast needed as there is no %? for pointer differences

View File

@ -35,6 +35,7 @@ static char const rcsid[] =
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/poison.h>
#include <linux/skbuff.h>
#include <linux/kernel.h>
#include <linux/vmalloc.h>
@ -3657,7 +3658,7 @@ probe_sram(struct idt77252_dev *card)
writel(SAR_CMD_WRITE_SRAM | (0 << 2), SAR_REG_CMD);
for (addr = 0x4000; addr < 0x80000; addr += 0x4000) {
writel(0xdeadbeef, SAR_REG_DR0);
writel(ATM_POISON, SAR_REG_DR0);
writel(SAR_CMD_WRITE_SRAM | (addr << 2), SAR_REG_CMD);
writel(SAR_CMD_READ_SRAM | (0 << 2), SAR_REG_CMD);

View File

@ -739,6 +739,7 @@ static int bluecard_open(bluecard_info_t *info)
hdev->type = HCI_PCCARD;
hdev->driver_data = info;
SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
hdev->open = bluecard_hci_open;
hdev->close = bluecard_hci_close;

View File

@ -582,6 +582,7 @@ static int bt3c_open(bt3c_info_t *info)
hdev->type = HCI_PCCARD;
hdev->driver_data = info;
SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
hdev->open = bt3c_hci_open;
hdev->close = bt3c_hci_close;

View File

@ -502,6 +502,7 @@ static int btuart_open(btuart_info_t *info)
hdev->type = HCI_PCCARD;
hdev->driver_data = info;
SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
hdev->open = btuart_hci_open;
hdev->close = btuart_hci_close;

View File

@ -484,6 +484,7 @@ static int dtl1_open(dtl1_info_t *info)
hdev->type = HCI_PCCARD;
hdev->driver_data = info;
SET_HCIDEV_DEV(hdev, &info->p_dev->dev);
hdev->open = dtl1_hci_open;
hdev->close = dtl1_hci_close;

View File

@ -122,6 +122,9 @@ static struct usb_device_id blacklist_ids[] = {
/* RTX Telecom based adapter with buggy SCO support */
{ USB_DEVICE(0x0400, 0x0807), .driver_info = HCI_BROKEN_ISOC },
/* Belkin F8T012 */
{ USB_DEVICE(0x050d, 0x0012), .driver_info = HCI_WRONG_SCO_MTU },
/* Digianswer devices */
{ USB_DEVICE(0x08fd, 0x0001), .driver_info = HCI_DIGIANSWER },
{ USB_DEVICE(0x08fd, 0x0002), .driver_info = HCI_IGNORE },
@ -129,6 +132,9 @@ static struct usb_device_id blacklist_ids[] = {
/* CSR BlueCore Bluetooth Sniffer */
{ USB_DEVICE(0x0a12, 0x0002), .driver_info = HCI_SNIFFER },
/* Frontline ComProbe Bluetooth Sniffer */
{ USB_DEVICE(0x16d3, 0x0002), .driver_info = HCI_SNIFFER },
{ } /* Terminating entry */
};
@ -984,6 +990,9 @@ static int hci_usb_probe(struct usb_interface *intf, const struct usb_device_id
if (reset || id->driver_info & HCI_RESET)
set_bit(HCI_QUIRK_RESET_ON_INIT, &hdev->quirks);
if (id->driver_info & HCI_WRONG_SCO_MTU)
set_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks);
if (id->driver_info & HCI_SNIFFER) {
if (le16_to_cpu(udev->descriptor.bcdDevice) > 0x997)
set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
@ -1042,10 +1051,81 @@ static void hci_usb_disconnect(struct usb_interface *intf)
hci_free_dev(hdev);
}
static int hci_usb_suspend(struct usb_interface *intf, pm_message_t message)
{
struct hci_usb *husb = usb_get_intfdata(intf);
struct list_head killed;
unsigned long flags;
int i;
if (!husb || intf == husb->isoc_iface)
return 0;
hci_suspend_dev(husb->hdev);
INIT_LIST_HEAD(&killed);
for (i = 0; i < 4; i++) {
struct _urb_queue *q = &husb->pending_q[i];
struct _urb *_urb, *_tmp;
while ((_urb = _urb_dequeue(q))) {
/* reset queue since _urb_dequeue sets it to NULL */
_urb->queue = q;
usb_kill_urb(&_urb->urb);
list_add(&_urb->list, &killed);
}
spin_lock_irqsave(&q->lock, flags);
list_for_each_entry_safe(_urb, _tmp, &killed, list) {
list_move_tail(&_urb->list, &q->head);
}
spin_unlock_irqrestore(&q->lock, flags);
}
return 0;
}
static int hci_usb_resume(struct usb_interface *intf)
{
struct hci_usb *husb = usb_get_intfdata(intf);
unsigned long flags;
int i, err = 0;
if (!husb || intf == husb->isoc_iface)
return 0;
for (i = 0; i < 4; i++) {
struct _urb_queue *q = &husb->pending_q[i];
struct _urb *_urb;
spin_lock_irqsave(&q->lock, flags);
list_for_each_entry(_urb, &q->head, list) {
err = usb_submit_urb(&_urb->urb, GFP_ATOMIC);
if (err)
break;
}
spin_unlock_irqrestore(&q->lock, flags);
if (err)
return -EIO;
}
hci_resume_dev(husb->hdev);
return 0;
}
static struct usb_driver hci_usb_driver = {
.name = "hci_usb",
.probe = hci_usb_probe,
.disconnect = hci_usb_disconnect,
.suspend = hci_usb_suspend,
.resume = hci_usb_resume,
.id_table = bluetooth_ids,
};

View File

@ -35,6 +35,7 @@
#define HCI_SNIFFER 0x10
#define HCI_BCM92035 0x20
#define HCI_BROKEN_ISOC 0x40
#define HCI_WRONG_SCO_MTU 0x80
#define HCI_MAX_IFACE_NUM 3

View File

@ -277,7 +277,6 @@ static int vhci_open(struct inode *inode, struct file *file)
hdev->type = HCI_VHCI;
hdev->driver_data = vhci;
SET_HCIDEV_DEV(hdev, vhci_miscdev.dev);
hdev->open = vhci_open_dev;
hdev->close = vhci_close_dev;

View File

@ -166,8 +166,8 @@ static struct dma_chan *dma_client_chan_alloc(struct dma_client *client)
}
/**
* dma_client_chan_free - release a DMA channel
* @chan: &dma_chan
* dma_chan_cleanup - release a DMA channel's resources
* @kref: kernel reference structure that contains the DMA channel device
*/
void dma_chan_cleanup(struct kref *kref)
{
@ -199,7 +199,7 @@ static void dma_client_chan_free(struct dma_chan *chan)
* dma_chans_rebalance - reallocate channels to clients
*
* When the number of DMA channel in the system changes,
* channels need to be rebalanced among clients
* channels need to be rebalanced among clients.
*/
static void dma_chans_rebalance(void)
{
@ -264,7 +264,7 @@ struct dma_client *dma_async_client_register(dma_event_callback event_callback)
/**
* dma_async_client_unregister - unregister a client and free the &dma_client
* @client:
* @client: &dma_client to free
*
* Force frees any allocated DMA channels, frees the &dma_client memory
*/
@ -306,7 +306,7 @@ void dma_async_client_chan_request(struct dma_client *client,
}
/**
* dma_async_device_register -
* dma_async_device_register - registers DMA devices found
* @device: &dma_device
*/
int dma_async_device_register(struct dma_device *device)
@ -348,8 +348,8 @@ int dma_async_device_register(struct dma_device *device)
}
/**
* dma_async_device_unregister -
* @device: &dma_device
* dma_async_device_cleanup - function called when all references are released
* @kref: kernel reference object
*/
static void dma_async_device_cleanup(struct kref *kref)
{
@ -359,7 +359,11 @@ static void dma_async_device_cleanup(struct kref *kref)
complete(&device->done);
}
void dma_async_device_unregister(struct dma_device* device)
/**
* dma_async_device_unregister - unregisters DMA devices
* @device: &dma_device
*/
void dma_async_device_unregister(struct dma_device *device)
{
struct dma_chan *chan;
unsigned long flags;

View File

@ -217,7 +217,7 @@ static void ioat_dma_free_chan_resources(struct dma_chan *chan)
/**
* do_ioat_dma_memcpy - actual function that initiates a IOAT DMA transaction
* @chan: IOAT DMA channel handle
* @ioat_chan: IOAT DMA channel handle
* @dest: DMA destination address
* @src: DMA source address
* @len: transaction length in bytes
@ -383,7 +383,7 @@ static dma_cookie_t ioat_dma_memcpy_buf_to_pg(struct dma_chan *chan,
* @dest_off: offset into that page
* @src_pg: pointer to the page to copy from
* @src_off: offset into that page
* @len: transaction length in bytes. This is guaranteed to not make a copy
* @len: transaction length in bytes. This is guaranteed not to make a copy
* across a page boundary.
*/
@ -407,7 +407,7 @@ static dma_cookie_t ioat_dma_memcpy_pg_to_pg(struct dma_chan *chan,
}
/**
* ioat_dma_memcpy_issue_pending - push potentially unrecognoized appended descriptors to hw
* ioat_dma_memcpy_issue_pending - push potentially unrecognized appended descriptors to hw
* @chan: DMA channel handle
*/
@ -510,6 +510,8 @@ static void ioat_dma_memcpy_cleanup(struct ioat_dma_chan *chan)
* ioat_dma_is_complete - poll the status of a IOAT DMA transaction
* @chan: IOAT DMA channel handle
* @cookie: DMA transaction identifier
* @done: if not %NULL, updated with last completed transaction
* @used: if not %NULL, updated with last used transaction
*/
static enum dma_status ioat_dma_is_complete(struct dma_chan *chan,
@ -826,7 +828,7 @@ static int __init ioat_init_module(void)
/* if forced, worst case is that rmmod hangs */
__unsafe(THIS_MODULE);
pci_module_init(&ioat_pci_drv);
return pci_module_init(&ioat_pci_drv);
}
module_init(ioat_init_module);

View File

@ -76,7 +76,7 @@
#define IOAT_CHANSTS_OFFSET 0x04 /* 64-bit Channel Status Register */
#define IOAT_CHANSTS_OFFSET_LOW 0x04
#define IOAT_CHANSTS_OFFSET_HIGH 0x08
#define IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR 0xFFFFFFFFFFFFFFC0
#define IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR 0xFFFFFFFFFFFFFFC0UL
#define IOAT_CHANSTS_SOFT_ERR 0x0000000000000010
#define IOAT_CHANSTS_DMA_TRANSFER_STATUS 0x0000000000000007
#define IOAT_CHANSTS_DMA_TRANSFER_STATUS_ACTIVE 0x0

View File

@ -31,7 +31,7 @@
#include <asm/io.h>
#include <asm/uaccess.h>
int num_pages_spanned(struct iovec *iov)
static int num_pages_spanned(struct iovec *iov)
{
return
((PAGE_ALIGN((unsigned long)iov->iov_base + iov->iov_len) -

View File

@ -68,8 +68,8 @@
#define DRV_MODULE_NAME "tg3"
#define PFX DRV_MODULE_NAME ": "
#define DRV_MODULE_VERSION "3.61"
#define DRV_MODULE_RELDATE "June 29, 2006"
#define DRV_MODULE_VERSION "3.62"
#define DRV_MODULE_RELDATE "June 30, 2006"
#define TG3_DEF_MAC_MODE 0
#define TG3_DEF_RX_MODE 0
@ -3798,18 +3798,24 @@ static int tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
goto out_unlock;
}
tcp_opt_len = ((skb->h.th->doff - 5) * 4);
ip_tcp_len = (skb->nh.iph->ihl * 4) + sizeof(struct tcphdr);
if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
mss |= (skb_headlen(skb) - ETH_HLEN) << 9;
else {
tcp_opt_len = ((skb->h.th->doff - 5) * 4);
ip_tcp_len = (skb->nh.iph->ihl * 4) +
sizeof(struct tcphdr);
skb->nh.iph->check = 0;
skb->nh.iph->tot_len = htons(mss + ip_tcp_len +
tcp_opt_len);
mss |= (ip_tcp_len + tcp_opt_len) << 9;
}
base_flags |= (TXD_FLAG_CPU_PRE_DMA |
TXD_FLAG_CPU_POST_DMA);
skb->nh.iph->check = 0;
skb->nh.iph->tot_len = htons(mss + ip_tcp_len + tcp_opt_len);
skb->h.th->check = 0;
mss |= (ip_tcp_len + tcp_opt_len) << 9;
}
else if (skb->ip_summed == CHECKSUM_HW)
base_flags |= TXD_FLAG_TCPUDP_CSUM;
@ -7887,6 +7893,12 @@ static int tg3_set_tso(struct net_device *dev, u32 value)
return -EINVAL;
return 0;
}
if (tp->tg3_flags2 & TG3_FLG2_HW_TSO_2) {
if (value)
dev->features |= NETIF_F_TSO6;
else
dev->features &= ~NETIF_F_TSO6;
}
return ethtool_op_set_tso(dev, value);
}
#endif
@ -11507,8 +11519,11 @@ static int __devinit tg3_init_one(struct pci_dev *pdev,
* Firmware TSO on older chips gives lower performance, so it
* is off by default, but can be enabled using ethtool.
*/
if (tp->tg3_flags2 & TG3_FLG2_HW_TSO)
if (tp->tg3_flags2 & TG3_FLG2_HW_TSO) {
dev->features |= NETIF_F_TSO;
if (tp->tg3_flags2 & TG3_FLG2_HW_TSO_2)
dev->features |= NETIF_F_TSO6;
}
#endif

View File

@ -44,7 +44,7 @@ enum dma_event {
};
/**
* typedef dma_cookie_t
* typedef dma_cookie_t - an opaque DMA cookie
*
* if dma_cookie_t is >0 it's a DMA request cookie, <0 it's an error code
*/
@ -80,14 +80,14 @@ struct dma_chan_percpu {
/**
* struct dma_chan - devices supply DMA channels, clients use them
* @client: ptr to the client user of this chan, will be NULL when unused
* @device: ptr to the dma device who supplies this channel, always !NULL
* @client: ptr to the client user of this chan, will be %NULL when unused
* @device: ptr to the dma device who supplies this channel, always !%NULL
* @cookie: last cookie value returned to client
* @chan_id:
* @class_dev:
* @chan_id: channel ID for sysfs
* @class_dev: class device for sysfs
* @refcount: kref, used in "bigref" slow-mode
* @slow_ref:
* @rcu:
* @slow_ref: indicates that the DMA channel is free
* @rcu: the DMA channel's RCU head
* @client_node: used to add this to the client chan list
* @device_node: used to add this to the device chan list
* @local: per-cpu pointer to a struct dma_chan_percpu
@ -162,10 +162,17 @@ struct dma_client {
* @chancnt: how many DMA channels are supported
* @channels: the list of struct dma_chan
* @global_node: list_head for global dma_device_list
* @refcount:
* @done:
* @dev_id:
* Other func ptrs: used to make use of this device's capabilities
* @refcount: reference count
* @done: IO completion struct
* @dev_id: unique device ID
* @device_alloc_chan_resources: allocate resources and return the
* number of allocated descriptors
* @device_free_chan_resources: release DMA channel's resources
* @device_memcpy_buf_to_buf: memcpy buf pointer to buf pointer
* @device_memcpy_buf_to_pg: memcpy buf pointer to struct page
* @device_memcpy_pg_to_pg: memcpy struct page/offset to struct page/offset
* @device_memcpy_complete: poll the status of an IOAT DMA transaction
* @device_memcpy_issue_pending: push appended descriptors to hardware
*/
struct dma_device {
@ -211,7 +218,7 @@ void dma_async_client_chan_request(struct dma_client *client,
* Both @dest and @src must be mappable to a bus address according to the
* DMA mapping API rules for streaming mappings.
* Both @dest and @src must stay memory resident (kernel memory or locked
* user space pages)
* user space pages).
*/
static inline dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
void *dest, void *src, size_t len)
@ -225,7 +232,7 @@ static inline dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
}
/**
* dma_async_memcpy_buf_to_pg - offloaded copy
* dma_async_memcpy_buf_to_pg - offloaded copy from address to page
* @chan: DMA channel to offload copy to
* @page: destination page
* @offset: offset in page to copy to
@ -250,18 +257,18 @@ static inline dma_cookie_t dma_async_memcpy_buf_to_pg(struct dma_chan *chan,
}
/**
* dma_async_memcpy_buf_to_pg - offloaded copy
* dma_async_memcpy_pg_to_pg - offloaded copy from page to page
* @chan: DMA channel to offload copy to
* @dest_page: destination page
* @dest_pg: destination page
* @dest_off: offset in page to copy to
* @src_page: source page
* @src_pg: source page
* @src_off: offset in page to copy from
* @len: length
*
* Both @dest_page/@dest_off and @src_page/@src_off must be mappable to a bus
* address according to the DMA mapping API rules for streaming mappings.
* Both @dest_page/@dest_off and @src_page/@src_off must stay memory resident
* (kernel memory or locked user space pages)
* (kernel memory or locked user space pages).
*/
static inline dma_cookie_t dma_async_memcpy_pg_to_pg(struct dma_chan *chan,
struct page *dest_pg, unsigned int dest_off, struct page *src_pg,
@ -278,7 +285,7 @@ static inline dma_cookie_t dma_async_memcpy_pg_to_pg(struct dma_chan *chan,
/**
* dma_async_memcpy_issue_pending - flush pending copies to HW
* @chan:
* @chan: target DMA channel
*
* This allows drivers to push copies to HW in batches,
* reducing MMIO writes where possible.
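To make the documented flow concrete, here is a hedged usage sketch
that relies only on the wrappers visible in this header; channel
acquisition and completion polling are elided, and error handling is
minimal:
	dma_cookie_t cookie;

	cookie = dma_async_memcpy_buf_to_buf(chan, dest, src, len);
	if (cookie < 0)
		return cookie;	/* a negative cookie is an error code */

	/* push the appended descriptor to hardware in one MMIO write */
	dma_async_memcpy_issue_pending(chan);
	/* completion is then polled via the device's memcpy_complete op */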

View File

@ -44,6 +44,11 @@
/********** drivers/atm/ **********/
#define ATM_POISON_FREE 0x12
#define ATM_POISON 0xdeadbeef
/********** net/ **********/
#define NEIGHBOR_DEAD 0xdeadbeef
#define NETFILTER_LINK_POISON 0xdead57ac
/********** kernel/mutexes **********/
#define MUTEX_DEBUG_INIT 0x11

View File

@ -182,14 +182,26 @@ typedef struct {
typedef struct ax25_route {
struct ax25_route *next;
atomic_t ref;
atomic_t refcount;
ax25_address callsign;
struct net_device *dev;
ax25_digi *digipeat;
char ip_mode;
struct timer_list timer;
} ax25_route;
static inline void ax25_hold_route(ax25_route *ax25_rt)
{
atomic_inc(&ax25_rt->refcount);
}
extern void __ax25_put_route(ax25_route *ax25_rt);
static inline void ax25_put_route(ax25_route *ax25_rt)
{
if (atomic_dec_and_test(&ax25_rt->refcount))
__ax25_put_route(ax25_rt);
}
typedef struct {
char slave; /* slave_mode? */
struct timer_list slave_timer; /* timeout timer */
@ -348,17 +360,11 @@ extern int ax25_check_iframes_acked(ax25_cb *, unsigned short);
extern void ax25_rt_device_down(struct net_device *);
extern int ax25_rt_ioctl(unsigned int, void __user *);
extern struct file_operations ax25_route_fops;
extern ax25_route *ax25_get_route(ax25_address *addr, struct net_device *dev);
extern int ax25_rt_autobind(ax25_cb *, ax25_address *);
extern ax25_route *ax25_rt_find_route(ax25_route *, ax25_address *,
struct net_device *);
extern struct sk_buff *ax25_rt_build_path(struct sk_buff *, ax25_address *, ax25_address *, ax25_digi *);
extern void ax25_rt_free(void);
static inline void ax25_put_route(ax25_route *ax25_rt)
{
atomic_dec(&ax25_rt->ref);
}
/* ax25_std_in.c */
extern int ax25_std_frame_in(ax25_cb *, struct sk_buff *, int);

View File

@ -175,6 +175,6 @@ extern int hci_sock_cleanup(void);
extern int bt_sysfs_init(void);
extern void bt_sysfs_cleanup(void);
extern struct class bt_class;
extern struct class *bt_class;
#endif /* __BLUETOOTH_H */

View File

@ -54,7 +54,8 @@
/* HCI device quirks */
enum {
HCI_QUIRK_RESET_ON_INIT,
HCI_QUIRK_RAW_DEVICE
HCI_QUIRK_RAW_DEVICE,
HCI_QUIRK_FIXUP_BUFFER_SIZE
};
/* HCI device flags */
@ -100,9 +101,10 @@ enum {
#define HCIINQUIRY _IOR('H', 240, int)
/* HCI timeouts */
#define HCI_CONN_TIMEOUT (HZ * 40)
#define HCI_DISCONN_TIMEOUT (HZ * 2)
#define HCI_CONN_IDLE_TIMEOUT (HZ * 60)
#define HCI_CONNECT_TIMEOUT (40000) /* 40 seconds */
#define HCI_DISCONN_TIMEOUT (2000) /* 2 seconds */
#define HCI_IDLE_TIMEOUT (6000) /* 6 seconds */
#define HCI_INIT_TIMEOUT (10000) /* 10 seconds */
/* HCI Packet types */
#define HCI_COMMAND_PKT 0x01
@ -144,7 +146,7 @@ enum {
#define LMP_TACCURACY 0x10
#define LMP_RSWITCH 0x20
#define LMP_HOLD 0x40
#define LMP_SNIF 0x80
#define LMP_SNIFF 0x80
#define LMP_PARK 0x01
#define LMP_RSSI 0x02
@ -159,13 +161,21 @@ enum {
#define LMP_PSCHEME 0x02
#define LMP_PCONTROL 0x04
#define LMP_SNIFF_SUBR 0x02
/* Connection modes */
#define HCI_CM_ACTIVE 0x0000
#define HCI_CM_HOLD 0x0001
#define HCI_CM_SNIFF 0x0002
#define HCI_CM_PARK 0x0003
/* Link policies */
#define HCI_LP_RSWITCH 0x0001
#define HCI_LP_HOLD 0x0002
#define HCI_LP_SNIFF 0x0004
#define HCI_LP_PARK 0x0008
/* Link mode */
/* Link modes */
#define HCI_LM_ACCEPT 0x8000
#define HCI_LM_MASTER 0x0001
#define HCI_LM_AUTH 0x0002
@ -191,7 +201,7 @@ struct hci_rp_read_loc_version {
} __attribute__ ((packed));
#define OCF_READ_LOCAL_FEATURES 0x0003
struct hci_rp_read_loc_features {
struct hci_rp_read_local_features {
__u8 status;
__u8 features[8];
} __attribute__ ((packed));
@ -375,17 +385,32 @@ struct hci_cp_change_conn_link_key {
} __attribute__ ((packed));
#define OCF_READ_REMOTE_FEATURES 0x001B
struct hci_cp_read_rmt_features {
struct hci_cp_read_remote_features {
__le16 handle;
} __attribute__ ((packed));
#define OCF_READ_REMOTE_VERSION 0x001D
struct hci_cp_read_rmt_version {
struct hci_cp_read_remote_version {
__le16 handle;
} __attribute__ ((packed));
/* Link Policy */
#define OGF_LINK_POLICY 0x02
#define OGF_LINK_POLICY 0x02
#define OCF_SNIFF_MODE 0x0003
struct hci_cp_sniff_mode {
__le16 handle;
__le16 max_interval;
__le16 min_interval;
__le16 attempt;
__le16 timeout;
} __attribute__ ((packed));
#define OCF_EXIT_SNIFF_MODE 0x0004
struct hci_cp_exit_sniff_mode {
__le16 handle;
} __attribute__ ((packed));
#define OCF_ROLE_DISCOVERY 0x0009
struct hci_cp_role_discovery {
__le16 handle;
@ -406,7 +431,7 @@ struct hci_rp_read_link_policy {
__le16 policy;
} __attribute__ ((packed));
#define OCF_SWITCH_ROLE 0x000B
#define OCF_SWITCH_ROLE 0x000B
struct hci_cp_switch_role {
bdaddr_t bdaddr;
__u8 role;
@ -422,6 +447,14 @@ struct hci_rp_write_link_policy {
__le16 handle;
} __attribute__ ((packed));
#define OCF_SNIFF_SUBRATE 0x0011
struct hci_cp_sniff_subrate {
__le16 handle;
__le16 max_latency;
__le16 min_remote_timeout;
__le16 min_local_timeout;
} __attribute__ ((packed));
/* Status params */
#define OGF_STATUS_PARAM 0x05
@ -581,15 +614,15 @@ struct hci_ev_link_key_notify {
__u8 key_type;
} __attribute__ ((packed));
#define HCI_EV_RMT_FEATURES 0x0B
struct hci_ev_rmt_features {
#define HCI_EV_REMOTE_FEATURES 0x0B
struct hci_ev_remote_features {
__u8 status;
__le16 handle;
__u8 features[8];
} __attribute__ ((packed));
#define HCI_EV_RMT_VERSION 0x0C
struct hci_ev_rmt_version {
#define HCI_EV_REMOTE_VERSION 0x0C
struct hci_ev_remote_version {
__u8 status;
__le16 handle;
__u8 lmp_ver;
@ -610,6 +643,16 @@ struct hci_ev_pscan_rep_mode {
__u8 pscan_rep_mode;
} __attribute__ ((packed));
#define HCI_EV_SNIFF_SUBRATE 0x2E
struct hci_ev_sniff_subrate {
__u8 status;
__le16 handle;
__le16 max_tx_latency;
__le16 max_rx_latency;
__le16 max_remote_timeout;
__le16 max_local_timeout;
} __attribute__ ((packed));
/* Internal events generated by Bluetooth stack */
#define HCI_EV_STACK_INTERNAL 0xFD
struct hci_ev_stack_internal {

View File

@ -31,10 +31,7 @@
#define HCI_PROTO_L2CAP 0
#define HCI_PROTO_SCO 1
#define HCI_INIT_TIMEOUT (HZ * 10)
/* HCI Core structures */
struct inquiry_data {
bdaddr_t bdaddr;
__u8 pscan_rep_mode;
@ -81,6 +78,10 @@ struct hci_dev {
__u16 link_policy;
__u16 link_mode;
__u32 idle_timeout;
__u16 sniff_min_interval;
__u16 sniff_max_interval;
unsigned long quirks;
atomic_t cmd_cnt;
@ -123,7 +124,8 @@ struct hci_dev {
atomic_t promisc;
struct class_device class_dev;
struct device *parent;
struct device dev;
struct module *owner;
@ -145,18 +147,24 @@ struct hci_conn {
bdaddr_t dst;
__u16 handle;
__u16 state;
__u8 mode;
__u8 type;
__u8 out;
__u8 dev_class[3];
__u8 features[8];
__u16 interval;
__u16 link_policy;
__u32 link_mode;
__u8 power_save;
unsigned long pend;
unsigned int sent;
struct sk_buff_head data_q;
struct timer_list timer;
struct timer_list disc_timer;
struct timer_list idle_timer;
struct hci_dev *hdev;
void *l2cap_data;
void *sco_data;
@ -211,7 +219,8 @@ void hci_inquiry_cache_update(struct hci_dev *hdev, struct inquiry_data *data);
enum {
HCI_CONN_AUTH_PEND,
HCI_CONN_ENCRYPT_PEND,
HCI_CONN_RSWITCH_PEND
HCI_CONN_RSWITCH_PEND,
HCI_CONN_MODE_CHANGE_PEND,
};
static inline void hci_conn_hash_init(struct hci_dev *hdev)
@ -286,31 +295,27 @@ int hci_conn_encrypt(struct hci_conn *conn);
int hci_conn_change_link_key(struct hci_conn *conn);
int hci_conn_switch_role(struct hci_conn *conn, uint8_t role);
static inline void hci_conn_set_timer(struct hci_conn *conn, unsigned long timeout)
{
mod_timer(&conn->timer, jiffies + timeout);
}
static inline void hci_conn_del_timer(struct hci_conn *conn)
{
del_timer(&conn->timer);
}
void hci_conn_enter_active_mode(struct hci_conn *conn);
void hci_conn_enter_sniff_mode(struct hci_conn *conn);
static inline void hci_conn_hold(struct hci_conn *conn)
{
atomic_inc(&conn->refcnt);
hci_conn_del_timer(conn);
del_timer(&conn->disc_timer);
}
static inline void hci_conn_put(struct hci_conn *conn)
{
if (atomic_dec_and_test(&conn->refcnt)) {
unsigned long timeo;
if (conn->type == ACL_LINK) {
unsigned long timeo = (conn->out) ?
HCI_DISCONN_TIMEOUT : HCI_DISCONN_TIMEOUT * 2;
hci_conn_set_timer(conn, timeo);
timeo = msecs_to_jiffies(HCI_DISCONN_TIMEOUT);
if (!conn->out)
timeo *= 2;
del_timer(&conn->idle_timer);
} else
hci_conn_set_timer(conn, HZ / 100);
timeo = msecs_to_jiffies(10);
mod_timer(&conn->disc_timer, jiffies + timeo);
}
}
@ -408,11 +413,13 @@ static inline int hci_recv_frame(struct sk_buff *skb)
int hci_register_sysfs(struct hci_dev *hdev);
void hci_unregister_sysfs(struct hci_dev *hdev);
#define SET_HCIDEV_DEV(hdev, pdev) ((hdev)->class_dev.dev = (pdev))
#define SET_HCIDEV_DEV(hdev, pdev) ((hdev)->parent = (pdev))
/* ----- LMP capabilities ----- */
#define lmp_rswitch_capable(dev) (dev->features[0] & LMP_RSWITCH)
#define lmp_encrypt_capable(dev) (dev->features[0] & LMP_ENCRYPT)
#define lmp_rswitch_capable(dev) ((dev)->features[0] & LMP_RSWITCH)
#define lmp_encrypt_capable(dev) ((dev)->features[0] & LMP_ENCRYPT)
#define lmp_sniff_capable(dev) ((dev)->features[0] & LMP_SNIFF)
#define lmp_sniffsubr_capable(dev) ((dev)->features[5] & LMP_SNIFF_SUBR)
/* ----- HCI protocols ----- */
struct hci_proto {

View File

@ -23,6 +23,7 @@
#include <linux/if.h> /* for IFF_UP */
#include <linux/inetdevice.h>
#include <linux/bitops.h>
#include <linux/poison.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/rcupdate.h>
@ -266,7 +267,7 @@ static void clip_neigh_destroy(struct neighbour *neigh)
DPRINTK("clip_neigh_destroy (neigh %p)\n", neigh);
if (NEIGH2ENTRY(neigh)->vccs)
printk(KERN_CRIT "clip_neigh_destroy: vccs != NULL !!!\n");
NEIGH2ENTRY(neigh)->vccs = (void *) 0xdeadbeef;
NEIGH2ENTRY(neigh)->vccs = (void *) NEIGHBOR_DEAD;
}
static void clip_neigh_solicit(struct neighbour *neigh, struct sk_buff *skb)

View File

@ -103,11 +103,13 @@ int ax25_rebuild_header(struct sk_buff *skb)
{
struct sk_buff *ourskb;
unsigned char *bp = skb->data;
struct net_device *dev;
ax25_route *route;
struct net_device *dev = NULL;
ax25_address *src, *dst;
ax25_digi *digipeat = NULL;
ax25_dev *ax25_dev;
ax25_route _route, *route = &_route;
ax25_cb *ax25;
char ip_mode = ' ';
dst = (ax25_address *)(bp + 1);
src = (ax25_address *)(bp + 8);
@ -115,8 +117,12 @@ int ax25_rebuild_header(struct sk_buff *skb)
if (arp_find(bp + 1, skb))
return 1;
route = ax25_rt_find_route(route, dst, NULL);
dev = route->dev;
route = ax25_get_route(dst, NULL);
if (route) {
digipeat = route->digipeat;
dev = route->dev;
ip_mode = route->ip_mode;
};
if (dev == NULL)
dev = skb->dev;
@ -126,7 +132,7 @@ int ax25_rebuild_header(struct sk_buff *skb)
}
if (bp[16] == AX25_P_IP) {
if (route->ip_mode == 'V' || (route->ip_mode == ' ' && ax25_dev->values[AX25_VALUES_IPDEFMODE])) {
if (ip_mode == 'V' || (ip_mode == ' ' && ax25_dev->values[AX25_VALUES_IPDEFMODE])) {
/*
* We copy the buffer and release the original thereby
* keeping it straight
@ -172,7 +178,7 @@ int ax25_rebuild_header(struct sk_buff *skb)
ourskb,
ax25_dev->values[AX25_VALUES_PACLEN],
&src_c,
&dst_c, route->digipeat, dev);
&dst_c, digipeat, dev);
if (ax25) {
ax25_cb_put(ax25);
}
@ -190,7 +196,7 @@ int ax25_rebuild_header(struct sk_buff *skb)
skb_pull(skb, AX25_KISS_HEADER_LEN);
if (route->digipeat != NULL) {
if (digipeat != NULL) {
if ((ourskb = ax25_rt_build_path(skb, src, dst, route->digipeat)) == NULL) {
kfree_skb(skb);
goto put;
@ -202,7 +208,8 @@ int ax25_rebuild_header(struct sk_buff *skb)
ax25_queue_xmit(skb, dev);
put:
ax25_put_route(route);
if (route)
ax25_put_route(route);
return 1;
}

View File

@ -41,8 +41,6 @@
static ax25_route *ax25_route_list;
static DEFINE_RWLOCK(ax25_route_lock);
static ax25_route *ax25_get_route(ax25_address *, struct net_device *);
void ax25_rt_device_down(struct net_device *dev)
{
ax25_route *s, *t, *ax25_rt;
@ -115,7 +113,7 @@ static int ax25_rt_add(struct ax25_routes_struct *route)
return -ENOMEM;
}
atomic_set(&ax25_rt->ref, 0);
atomic_set(&ax25_rt->refcount, 1);
ax25_rt->callsign = route->dest_addr;
ax25_rt->dev = ax25_dev->dev;
ax25_rt->digipeat = NULL;
@ -140,23 +138,10 @@ static int ax25_rt_add(struct ax25_routes_struct *route)
return 0;
}
static void ax25_rt_destroy(ax25_route *ax25_rt)
void __ax25_put_route(ax25_route *ax25_rt)
{
if (atomic_read(&ax25_rt->ref) == 0) {
kfree(ax25_rt->digipeat);
kfree(ax25_rt);
return;
}
/*
* Uh... Route is still in use; we can't yet destroy it. Retry later.
*/
init_timer(&ax25_rt->timer);
ax25_rt->timer.data = (unsigned long) ax25_rt;
ax25_rt->timer.function = (void *) ax25_rt_destroy;
ax25_rt->timer.expires = jiffies + 5 * HZ;
add_timer(&ax25_rt->timer);
kfree(ax25_rt->digipeat);
kfree(ax25_rt);
}
static int ax25_rt_del(struct ax25_routes_struct *route)
@ -177,12 +162,12 @@ static int ax25_rt_del(struct ax25_routes_struct *route)
ax25cmp(&route->dest_addr, &s->callsign) == 0) {
if (ax25_route_list == s) {
ax25_route_list = s->next;
ax25_rt_destroy(s);
ax25_put_route(s);
} else {
for (t = ax25_route_list; t != NULL; t = t->next) {
if (t->next == s) {
t->next = s->next;
ax25_rt_destroy(s);
ax25_put_route(s);
break;
}
}
@ -362,7 +347,7 @@ struct file_operations ax25_route_fops = {
*
* Only routes with a reference count of zero can be destroyed.
*/
static ax25_route *ax25_get_route(ax25_address *addr, struct net_device *dev)
ax25_route *ax25_get_route(ax25_address *addr, struct net_device *dev)
{
ax25_route *ax25_spe_rt = NULL;
ax25_route *ax25_def_rt = NULL;
@ -392,7 +377,7 @@ static ax25_route *ax25_get_route(ax25_address *addr, struct net_device *dev)
ax25_rt = ax25_spe_rt;
if (ax25_rt != NULL)
atomic_inc(&ax25_rt->ref);
ax25_hold_route(ax25_rt);
read_unlock(&ax25_route_lock);
@ -467,24 +452,6 @@ put:
return 0;
}
ax25_route *ax25_rt_find_route(ax25_route * route, ax25_address *addr,
struct net_device *dev)
{
ax25_route *ax25_rt;
if ((ax25_rt = ax25_get_route(addr, dev)))
return ax25_rt;
route->next = NULL;
atomic_set(&route->ref, 1);
route->callsign = *addr;
route->dev = dev;
route->digipeat = NULL;
route->ip_mode = ' ';
return route;
}
struct sk_buff *ax25_rt_build_path(struct sk_buff *skb, ax25_address *src,
ax25_address *dest, ax25_digi *digi)
{

View File

@ -48,7 +48,7 @@
#define BT_DBG(D...)
#endif
#define VERSION "2.8"
#define VERSION "2.10"
/* Bluetooth sockets */
#define BT_MAX_PROTO 8
@ -307,14 +307,22 @@ static struct net_proto_family bt_sock_family_ops = {
static int __init bt_init(void)
{
int err;
BT_INFO("Core ver %s", VERSION);
sock_register(&bt_sock_family_ops);
err = bt_sysfs_init();
if (err < 0)
return err;
err = sock_register(&bt_sock_family_ops);
if (err < 0) {
bt_sysfs_cleanup();
return err;
}
BT_INFO("HCI device and connection manager initialized");
bt_sysfs_init();
hci_sock_init();
return 0;
@ -324,9 +332,9 @@ static void __exit bt_exit(void)
{
hci_sock_cleanup();
bt_sysfs_cleanup();
sock_unregister(PF_BLUETOOTH);
bt_sysfs_cleanup();
}
subsys_initcall(bt_init);

View File

@ -115,8 +115,8 @@ void hci_add_sco(struct hci_conn *conn, __u16 handle)
static void hci_conn_timeout(unsigned long arg)
{
struct hci_conn *conn = (void *)arg;
struct hci_dev *hdev = conn->hdev;
struct hci_conn *conn = (void *) arg;
struct hci_dev *hdev = conn->hdev;
BT_DBG("conn %p state %d", conn, conn->state);
@ -132,11 +132,13 @@ static void hci_conn_timeout(unsigned long arg)
return;
}
static void hci_conn_init_timer(struct hci_conn *conn)
static void hci_conn_idle(unsigned long arg)
{
init_timer(&conn->timer);
conn->timer.function = hci_conn_timeout;
conn->timer.data = (unsigned long)conn;
struct hci_conn *conn = (void *) arg;
BT_DBG("conn %p mode %d", conn, conn->mode);
hci_conn_enter_sniff_mode(conn);
}
struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst)
@ -145,17 +147,27 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst)
BT_DBG("%s dst %s", hdev->name, batostr(dst));
if (!(conn = kmalloc(sizeof(struct hci_conn), GFP_ATOMIC)))
conn = kzalloc(sizeof(struct hci_conn), GFP_ATOMIC);
if (!conn)
return NULL;
memset(conn, 0, sizeof(struct hci_conn));
bacpy(&conn->dst, dst);
conn->type = type;
conn->hdev = hdev;
conn->type = type;
conn->mode = HCI_CM_ACTIVE;
conn->state = BT_OPEN;
conn->power_save = 1;
skb_queue_head_init(&conn->data_q);
hci_conn_init_timer(conn);
init_timer(&conn->disc_timer);
conn->disc_timer.function = hci_conn_timeout;
conn->disc_timer.data = (unsigned long) conn;
init_timer(&conn->idle_timer);
conn->idle_timer.function = hci_conn_idle;
conn->idle_timer.data = (unsigned long) conn;
atomic_set(&conn->refcnt, 0);
@ -178,7 +190,9 @@ int hci_conn_del(struct hci_conn *conn)
BT_DBG("%s conn %p handle %d", hdev->name, conn, conn->handle);
hci_conn_del_timer(conn);
del_timer(&conn->idle_timer);
del_timer(&conn->disc_timer);
if (conn->type == SCO_LINK) {
struct hci_conn *acl = conn->link;
@ -364,6 +378,70 @@ int hci_conn_switch_role(struct hci_conn *conn, uint8_t role)
}
EXPORT_SYMBOL(hci_conn_switch_role);
/* Enter active mode */
void hci_conn_enter_active_mode(struct hci_conn *conn)
{
struct hci_dev *hdev = conn->hdev;
BT_DBG("conn %p mode %d", conn, conn->mode);
if (test_bit(HCI_RAW, &hdev->flags))
return;
if (conn->mode != HCI_CM_SNIFF || !conn->power_save)
goto timer;
if (!test_and_set_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) {
struct hci_cp_exit_sniff_mode cp;
cp.handle = __cpu_to_le16(conn->handle);
hci_send_cmd(hdev, OGF_LINK_POLICY,
OCF_EXIT_SNIFF_MODE, sizeof(cp), &cp);
}
timer:
if (hdev->idle_timeout > 0)
mod_timer(&conn->idle_timer,
jiffies + msecs_to_jiffies(hdev->idle_timeout));
}
/* Enter sniff mode */
void hci_conn_enter_sniff_mode(struct hci_conn *conn)
{
struct hci_dev *hdev = conn->hdev;
BT_DBG("conn %p mode %d", conn, conn->mode);
if (test_bit(HCI_RAW, &hdev->flags))
return;
if (!lmp_sniff_capable(hdev) || !lmp_sniff_capable(conn))
return;
if (conn->mode != HCI_CM_ACTIVE || !(conn->link_policy & HCI_LP_SNIFF))
return;
if (lmp_sniffsubr_capable(hdev) && lmp_sniffsubr_capable(conn)) {
struct hci_cp_sniff_subrate cp;
cp.handle = __cpu_to_le16(conn->handle);
cp.max_latency = __constant_cpu_to_le16(0);
cp.min_remote_timeout = __constant_cpu_to_le16(0);
cp.min_local_timeout = __constant_cpu_to_le16(0);
hci_send_cmd(hdev, OGF_LINK_POLICY,
OCF_SNIFF_SUBRATE, sizeof(cp), &cp);
}
if (!test_and_set_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) {
struct hci_cp_sniff_mode cp;
cp.handle = __cpu_to_le16(conn->handle);
cp.max_interval = __cpu_to_le16(hdev->sniff_max_interval);
cp.min_interval = __cpu_to_le16(hdev->sniff_min_interval);
cp.attempt = __constant_cpu_to_le16(4);
cp.timeout = __constant_cpu_to_le16(1);
hci_send_cmd(hdev, OGF_LINK_POLICY,
OCF_SNIFF_MODE, sizeof(cp), &cp);
}
}
/* Drop all connection on the device */
void hci_conn_hash_flush(struct hci_dev *hdev)
{

View File

@ -411,7 +411,7 @@ int hci_inquiry(void __user *arg)
}
hci_dev_unlock_bh(hdev);
timeo = ir.length * 2 * HZ;
timeo = ir.length * msecs_to_jiffies(2000);
if (do_inquiry && (err = hci_request(hdev, hci_inq_req, (unsigned long)&ir, timeo)) < 0)
goto done;
@ -479,7 +479,8 @@ int hci_dev_open(__u16 dev)
set_bit(HCI_INIT, &hdev->flags);
//__hci_request(hdev, hci_reset_req, 0, HZ);
ret = __hci_request(hdev, hci_init_req, 0, HCI_INIT_TIMEOUT);
ret = __hci_request(hdev, hci_init_req, 0,
msecs_to_jiffies(HCI_INIT_TIMEOUT));
clear_bit(HCI_INIT, &hdev->flags);
}
@ -546,7 +547,8 @@ static int hci_dev_do_close(struct hci_dev *hdev)
atomic_set(&hdev->cmd_cnt, 1);
if (!test_bit(HCI_RAW, &hdev->flags)) {
set_bit(HCI_INIT, &hdev->flags);
__hci_request(hdev, hci_reset_req, 0, HZ/4);
__hci_request(hdev, hci_reset_req, 0,
msecs_to_jiffies(250));
clear_bit(HCI_INIT, &hdev->flags);
}
@ -619,7 +621,8 @@ int hci_dev_reset(__u16 dev)
hdev->acl_cnt = 0; hdev->sco_cnt = 0;
if (!test_bit(HCI_RAW, &hdev->flags))
ret = __hci_request(hdev, hci_reset_req, 0, HCI_INIT_TIMEOUT);
ret = __hci_request(hdev, hci_reset_req, 0,
msecs_to_jiffies(HCI_INIT_TIMEOUT));
done:
tasklet_enable(&hdev->tx_task);
@ -657,7 +660,8 @@ int hci_dev_cmd(unsigned int cmd, void __user *arg)
switch (cmd) {
case HCISETAUTH:
err = hci_request(hdev, hci_auth_req, dr.dev_opt, HCI_INIT_TIMEOUT);
err = hci_request(hdev, hci_auth_req, dr.dev_opt,
msecs_to_jiffies(HCI_INIT_TIMEOUT));
break;
case HCISETENCRYPT:
@ -668,18 +672,19 @@ int hci_dev_cmd(unsigned int cmd, void __user *arg)
if (!test_bit(HCI_AUTH, &hdev->flags)) {
/* Auth must be enabled first */
err = hci_request(hdev, hci_auth_req,
dr.dev_opt, HCI_INIT_TIMEOUT);
err = hci_request(hdev, hci_auth_req, dr.dev_opt,
msecs_to_jiffies(HCI_INIT_TIMEOUT));
if (err)
break;
}
err = hci_request(hdev, hci_encrypt_req,
dr.dev_opt, HCI_INIT_TIMEOUT);
err = hci_request(hdev, hci_encrypt_req, dr.dev_opt,
msecs_to_jiffies(HCI_INIT_TIMEOUT));
break;
case HCISETSCAN:
err = hci_request(hdev, hci_scan_req, dr.dev_opt, HCI_INIT_TIMEOUT);
err = hci_request(hdev, hci_scan_req, dr.dev_opt,
msecs_to_jiffies(HCI_INIT_TIMEOUT));
break;
case HCISETPTYPE:
@ -812,8 +817,8 @@ void hci_free_dev(struct hci_dev *hdev)
{
skb_queue_purge(&hdev->driver_init);
/* will free via class release */
class_device_put(&hdev->class_dev);
/* will free via device release */
put_device(&hdev->dev);
}
EXPORT_SYMBOL(hci_free_dev);
@ -848,6 +853,10 @@ int hci_register_dev(struct hci_dev *hdev)
hdev->pkt_type = (HCI_DM1 | HCI_DH1 | HCI_HV1);
hdev->link_mode = (HCI_LM_ACCEPT);
hdev->idle_timeout = 0;
hdev->sniff_max_interval = 800;
hdev->sniff_min_interval = 80;
tasklet_init(&hdev->cmd_task, hci_cmd_task,(unsigned long) hdev);
tasklet_init(&hdev->rx_task, hci_rx_task, (unsigned long) hdev);
tasklet_init(&hdev->tx_task, hci_tx_task, (unsigned long) hdev);
@ -1220,6 +1229,9 @@ static inline void hci_sched_acl(struct hci_dev *hdev)
while (hdev->acl_cnt && (conn = hci_low_sent(hdev, ACL_LINK, &quote))) {
while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
BT_DBG("skb %p len %d", skb, skb->len);
hci_conn_enter_active_mode(conn);
hci_send_frame(skb);
hdev->acl_last_tx = jiffies;
@ -1298,6 +1310,8 @@ static inline void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
if (conn) {
register struct hci_proto *hp;
hci_conn_enter_active_mode(conn);
/* Send to upper protocol */
if ((hp = hci_proto[HCI_PROTO_L2CAP]) && hp->recv_acldata) {
hp->recv_acldata(conn, skb, flags);

View File

@ -83,6 +83,8 @@ static void hci_cc_link_policy(struct hci_dev *hdev, __u16 ocf, struct sk_buff *
{
struct hci_conn *conn;
struct hci_rp_role_discovery *rd;
struct hci_rp_write_link_policy *lp;
void *sent;
BT_DBG("%s ocf 0x%x", hdev->name, ocf);
@ -106,6 +108,27 @@ static void hci_cc_link_policy(struct hci_dev *hdev, __u16 ocf, struct sk_buff *
hci_dev_unlock(hdev);
break;
case OCF_WRITE_LINK_POLICY:
sent = hci_sent_cmd_data(hdev, OGF_LINK_POLICY, OCF_WRITE_LINK_POLICY);
if (!sent)
break;
lp = (struct hci_rp_write_link_policy *) skb->data;
if (lp->status)
break;
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(lp->handle));
if (conn) {
__le16 policy = get_unaligned((__le16 *) (sent + 2));
conn->link_policy = __le16_to_cpu(policy);
}
hci_dev_unlock(hdev);
break;
default:
BT_DBG("%s: Command complete: ogf LINK_POLICY ocf %x",
hdev->name, ocf);
@ -274,7 +297,7 @@ static void hci_cc_host_ctl(struct hci_dev *hdev, __u16 ocf, struct sk_buff *skb
/* Command Complete OGF INFO_PARAM */
static void hci_cc_info_param(struct hci_dev *hdev, __u16 ocf, struct sk_buff *skb)
{
struct hci_rp_read_loc_features *lf;
struct hci_rp_read_local_features *lf;
struct hci_rp_read_buffer_size *bs;
struct hci_rp_read_bd_addr *ba;
@ -282,7 +305,7 @@ static void hci_cc_info_param(struct hci_dev *hdev, __u16 ocf, struct sk_buff *s
switch (ocf) {
case OCF_READ_LOCAL_FEATURES:
lf = (struct hci_rp_read_loc_features *) skb->data;
lf = (struct hci_rp_read_local_features *) skb->data;
if (lf->status) {
BT_DBG("%s READ_LOCAL_FEATURES failed %d", hdev->name, lf->status);
@ -319,9 +342,17 @@ static void hci_cc_info_param(struct hci_dev *hdev, __u16 ocf, struct sk_buff *s
}
hdev->acl_mtu = __le16_to_cpu(bs->acl_mtu);
hdev->sco_mtu = bs->sco_mtu ? bs->sco_mtu : 64;
hdev->acl_pkts = hdev->acl_cnt = __le16_to_cpu(bs->acl_max_pkt);
hdev->sco_pkts = hdev->sco_cnt = __le16_to_cpu(bs->sco_max_pkt);
hdev->sco_mtu = bs->sco_mtu;
hdev->acl_pkts = __le16_to_cpu(bs->acl_max_pkt);
hdev->sco_pkts = __le16_to_cpu(bs->sco_max_pkt);
if (test_bit(HCI_QUIRK_FIXUP_BUFFER_SIZE, &hdev->quirks)) {
hdev->sco_mtu = 64;
hdev->sco_pkts = 8;
}
hdev->acl_cnt = hdev->acl_pkts;
hdev->sco_cnt = hdev->sco_pkts;
BT_DBG("%s mtu: acl %d, sco %d max_pkt: acl %d, sco %d", hdev->name,
hdev->acl_mtu, hdev->sco_mtu, hdev->acl_pkts, hdev->sco_pkts);
@ -439,8 +470,46 @@ static void hci_cs_link_policy(struct hci_dev *hdev, __u16 ocf, __u8 status)
BT_DBG("%s ocf 0x%x", hdev->name, ocf);
switch (ocf) {
case OCF_SNIFF_MODE:
if (status) {
struct hci_conn *conn;
struct hci_cp_sniff_mode *cp = hci_sent_cmd_data(hdev, OGF_LINK_POLICY, OCF_SNIFF_MODE);
if (!cp)
break;
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(cp->handle));
if (conn) {
clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend);
}
hci_dev_unlock(hdev);
}
break;
case OCF_EXIT_SNIFF_MODE:
if (status) {
struct hci_conn *conn;
struct hci_cp_exit_sniff_mode *cp = hci_sent_cmd_data(hdev, OGF_LINK_POLICY, OCF_EXIT_SNIFF_MODE);
if (!cp)
break;
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(cp->handle));
if (conn) {
clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend);
}
hci_dev_unlock(hdev);
}
break;
default:
BT_DBG("%s Command status: ogf HOST_POLICY ocf %x", hdev->name, ocf);
BT_DBG("%s Command status: ogf LINK_POLICY ocf %x", hdev->name, ocf);
break;
}
}
@ -622,14 +691,16 @@ static inline void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *sk
else
cp.role = 0x01; /* Remain slave */
hci_send_cmd(hdev, OGF_LINK_CTL, OCF_ACCEPT_CONN_REQ, sizeof(cp), &cp);
hci_send_cmd(hdev, OGF_LINK_CTL,
OCF_ACCEPT_CONN_REQ, sizeof(cp), &cp);
} else {
/* Connection rejected */
struct hci_cp_reject_conn_req cp;
bacpy(&cp.bdaddr, &ev->bdaddr);
cp.reason = 0x0f;
hci_send_cmd(hdev, OGF_LINK_CTL, OCF_REJECT_CONN_REQ, sizeof(cp), &cp);
hci_send_cmd(hdev, OGF_LINK_CTL,
OCF_REJECT_CONN_REQ, sizeof(cp), &cp);
}
}
@ -637,7 +708,7 @@ static inline void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *sk
static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_conn_complete *ev = (struct hci_ev_conn_complete *) skb->data;
struct hci_conn *conn = NULL;
struct hci_conn *conn;
BT_DBG("%s", hdev->name);
@ -659,12 +730,21 @@ static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *s
if (test_bit(HCI_ENCRYPT, &hdev->flags))
conn->link_mode |= HCI_LM_ENCRYPT;
/* Get remote features */
if (conn->type == ACL_LINK) {
struct hci_cp_read_remote_features cp;
cp.handle = ev->handle;
hci_send_cmd(hdev, OGF_LINK_CTL,
OCF_READ_REMOTE_FEATURES, sizeof(cp), &cp);
}
/* Set link policy */
if (conn->type == ACL_LINK && hdev->link_policy) {
struct hci_cp_write_link_policy cp;
cp.handle = ev->handle;
cp.policy = __cpu_to_le16(hdev->link_policy);
hci_send_cmd(hdev, OGF_LINK_POLICY, OCF_WRITE_LINK_POLICY, sizeof(cp), &cp);
hci_send_cmd(hdev, OGF_LINK_POLICY,
OCF_WRITE_LINK_POLICY, sizeof(cp), &cp);
}
/* Set packet type for incoming connection */
@ -675,7 +755,8 @@ static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *s
__cpu_to_le16(hdev->pkt_type & ACL_PTYPE_MASK):
__cpu_to_le16(hdev->pkt_type & SCO_PTYPE_MASK);
hci_send_cmd(hdev, OGF_LINK_CTL, OCF_CHANGE_CONN_PTYPE, sizeof(cp), &cp);
hci_send_cmd(hdev, OGF_LINK_CTL,
OCF_CHANGE_CONN_PTYPE, sizeof(cp), &cp);
}
} else
conn->state = BT_CLOSED;
@ -703,8 +784,7 @@ static inline void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *s
static inline void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_disconn_complete *ev = (struct hci_ev_disconn_complete *) skb->data;
struct hci_conn *conn = NULL;
__u16 handle = __le16_to_cpu(ev->handle);
struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
@ -713,7 +793,7 @@ static inline void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, handle);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
conn->state = BT_CLOSED;
hci_proto_disconn_ind(conn, ev->reason);
@ -770,7 +850,7 @@ static inline void hci_num_comp_pkts_evt(struct hci_dev *hdev, struct sk_buff *s
static inline void hci_role_change_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_role_change *ev = (struct hci_ev_role_change *) skb->data;
struct hci_conn *conn = NULL;
struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
@ -793,18 +873,43 @@ static inline void hci_role_change_evt(struct hci_dev *hdev, struct sk_buff *skb
hci_dev_unlock(hdev);
}
/* Authentication Complete */
static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
/* Mode Change */
static inline void hci_mode_change_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_auth_complete *ev = (struct hci_ev_auth_complete *) skb->data;
struct hci_conn *conn = NULL;
__u16 handle = __le16_to_cpu(ev->handle);
struct hci_ev_mode_change *ev = (struct hci_ev_mode_change *) skb->data;
struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, handle);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
conn->mode = ev->mode;
conn->interval = __le16_to_cpu(ev->interval);
if (!test_and_clear_bit(HCI_CONN_MODE_CHANGE_PEND, &conn->pend)) {
if (conn->mode == HCI_CM_ACTIVE)
conn->power_save = 1;
else
conn->power_save = 0;
}
}
hci_dev_unlock(hdev);
}
/* Authentication Complete */
static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_auth_complete *ev = (struct hci_ev_auth_complete *) skb->data;
struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
if (!ev->status)
conn->link_mode |= HCI_LM_AUTH;
@ -819,8 +924,7 @@ static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *s
cp.handle = __cpu_to_le16(conn->handle);
cp.encrypt = 1;
hci_send_cmd(conn->hdev, OGF_LINK_CTL,
OCF_SET_CONN_ENCRYPT,
sizeof(cp), &cp);
OCF_SET_CONN_ENCRYPT, sizeof(cp), &cp);
} else {
clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->pend);
hci_encrypt_cfm(conn, ev->status, 0x00);
@ -835,14 +939,13 @@ static inline void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *s
static inline void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_encrypt_change *ev = (struct hci_ev_encrypt_change *) skb->data;
struct hci_conn *conn = NULL;
__u16 handle = __le16_to_cpu(ev->handle);
struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, handle);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
if (!ev->status) {
if (ev->encrypt)
@ -863,14 +966,13 @@ static inline void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff *
static inline void hci_change_conn_link_key_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_change_conn_link_key_complete *ev = (struct hci_ev_change_conn_link_key_complete *) skb->data;
struct hci_conn *conn = NULL;
__u16 handle = __le16_to_cpu(ev->handle);
struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, handle);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
if (!ev->status)
conn->link_mode |= HCI_LM_SECURE;
@ -898,18 +1000,35 @@ static inline void hci_link_key_notify_evt(struct hci_dev *hdev, struct sk_buff
{
}
/* Clock Offset */
static inline void hci_clock_offset_evt(struct hci_dev *hdev, struct sk_buff *skb)
/* Remote Features */
static inline void hci_remote_features_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_clock_offset *ev = (struct hci_ev_clock_offset *) skb->data;
struct hci_conn *conn = NULL;
__u16 handle = __le16_to_cpu(ev->handle);
struct hci_ev_remote_features *ev = (struct hci_ev_remote_features *) skb->data;
struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, handle);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn && !ev->status) {
memcpy(conn->features, ev->features, sizeof(conn->features));
}
hci_dev_unlock(hdev);
}
/* Clock Offset */
static inline void hci_clock_offset_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_clock_offset *ev = (struct hci_ev_clock_offset *) skb->data;
struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn && !ev->status) {
struct inquiry_entry *ie;
@ -940,6 +1059,23 @@ static inline void hci_pscan_rep_mode_evt(struct hci_dev *hdev, struct sk_buff *
hci_dev_unlock(hdev);
}
/* Sniff Subrate */
static inline void hci_sniff_subrate_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_sniff_subrate *ev = (struct hci_ev_sniff_subrate *) skb->data;
struct hci_conn *conn;
BT_DBG("%s status %d", hdev->name, ev->status);
hci_dev_lock(hdev);
conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle));
if (conn) {
}
hci_dev_unlock(hdev);
}
void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_event_hdr *hdr = (struct hci_event_hdr *) skb->data;
@ -988,6 +1124,10 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb)
hci_role_change_evt(hdev, skb);
break;
case HCI_EV_MODE_CHANGE:
hci_mode_change_evt(hdev, skb);
break;
case HCI_EV_AUTH_COMPLETE:
hci_auth_complete_evt(hdev, skb);
break;
@ -1012,6 +1152,10 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb)
hci_link_key_notify_evt(hdev, skb);
break;
case HCI_EV_REMOTE_FEATURES:
hci_remote_features_evt(hdev, skb);
break;
case HCI_EV_CLOCK_OFFSET:
hci_clock_offset_evt(hdev, skb);
break;
@ -1020,6 +1164,10 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb)
hci_pscan_rep_mode_evt(hdev, skb);
break;
case HCI_EV_SNIFF_SUBRATE:
hci_sniff_subrate_evt(hdev, skb);
break;
case HCI_EV_CMD_STATUS:
cs = (struct hci_ev_cmd_status *) skb->data;
skb_pull(skb, sizeof(cs));

View File

@ -3,6 +3,8 @@
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
@ -11,35 +13,35 @@
#define BT_DBG(D...)
#endif
static ssize_t show_name(struct device *dev, struct device_attribute *attr, char *buf)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
return sprintf(buf, "%s\n", hdev->name);
}
static ssize_t show_type(struct device *dev, struct device_attribute *attr, char *buf)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", hdev->type);
}
static ssize_t show_address(struct device *dev, struct device_attribute *attr, char *buf)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
bdaddr_t bdaddr;
baswap(&bdaddr, &hdev->bdaddr);
return sprintf(buf, "%s\n", batostr(&bdaddr));
}
static ssize_t show_flags(struct device *dev, struct device_attribute *attr, char *buf)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
return sprintf(buf, "0x%lx\n", hdev->flags);
}
static ssize_t show_inquiry_cache(struct device *dev, struct device_attribute *attr, char *buf)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
struct inquiry_cache *cache = &hdev->inq_cache;
struct inquiry_entry *e;
int n = 0;
@@ -61,94 +63,193 @@ static ssize_t show_inquiry_cache(struct class_device *cdev, char *buf)
return n;
}
static ssize_t show_idle_timeout(struct device *dev, struct device_attribute *attr, char *buf)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", hdev->idle_timeout);
}
static ssize_t store_idle_timeout(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
char *ptr;
__u32 val;
val = simple_strtoul(buf, &ptr, 10);
if (ptr == buf)
return -EINVAL;
if (val != 0 && (val < 500 || val > 3600000))
return -EINVAL;
hdev->idle_timeout = val;
return count;
}
static ssize_t show_sniff_max_interval(struct device *dev, struct device_attribute *attr, char *buf)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", hdev->sniff_max_interval);
}
static ssize_t store_sniff_max_interval(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
char *ptr;
__u16 val;
val = simple_strtoul(buf, &ptr, 10);
if (ptr == buf)
return -EINVAL;
if (val < 0x0002 || val > 0xFFFE || val % 2)
return -EINVAL;
if (val < hdev->sniff_min_interval)
return -EINVAL;
hdev->sniff_max_interval = val;
return count;
}
static ssize_t show_sniff_min_interval(struct device *dev, struct device_attribute *attr, char *buf)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", hdev->sniff_min_interval);
}
static ssize_t store_sniff_min_interval(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
char *ptr;
__u16 val;
val = simple_strtoul(buf, &ptr, 10);
if (ptr == buf)
return -EINVAL;
if (val < 0x0002 || val > 0xFFFE || val % 2)
return -EINVAL;
if (val > hdev->sniff_max_interval)
return -EINVAL;
hdev->sniff_min_interval = val;
return count;
}
static DEVICE_ATTR(name, S_IRUGO, show_name, NULL);
static DEVICE_ATTR(type, S_IRUGO, show_type, NULL);
static DEVICE_ATTR(address, S_IRUGO, show_address, NULL);
static DEVICE_ATTR(flags, S_IRUGO, show_flags, NULL);
static DEVICE_ATTR(inquiry_cache, S_IRUGO, show_inquiry_cache, NULL);
static DEVICE_ATTR(idle_timeout, S_IRUGO | S_IWUSR,
show_idle_timeout, store_idle_timeout);
static DEVICE_ATTR(sniff_max_interval, S_IRUGO | S_IWUSR,
show_sniff_max_interval, store_sniff_max_interval);
static DEVICE_ATTR(sniff_min_interval, S_IRUGO | S_IWUSR,
show_sniff_min_interval, store_sniff_min_interval);
static struct device_attribute *bt_attrs[] = {
&dev_attr_name,
&dev_attr_type,
&dev_attr_address,
&dev_attr_flags,
&dev_attr_inquiry_cache,
&dev_attr_idle_timeout,
&dev_attr_sniff_max_interval,
&dev_attr_sniff_min_interval,
NULL
};
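The two sniff-interval stores above accept only even values in the Bluetooth slot range 0x0002-0xFFFE and reject any pair with min > max. A standalone re-statement of that invariant (the helper and test harness are illustrative, not part of the patch):

#include <stdio.h>

/* Mirrors the checks in store_sniff_min_interval()/store_sniff_max_interval(). */
static int sniff_interval_ok(unsigned int min, unsigned int max)
{
	if (min < 0x0002 || min > 0xFFFE || min % 2)
		return 0;
	if (max < 0x0002 || max > 0xFFFE || max % 2)
		return 0;
	return min <= max;
}

int main(void)
{
	printf("%d\n", sniff_interval_ok(0x0004, 0x0800));	/* 1: valid pair */
	printf("%d\n", sniff_interval_ok(0x0003, 0x0800));	/* 0: odd value */
	printf("%d\n", sniff_interval_ok(0x0800, 0x0004));	/* 0: min > max */
	return 0;
}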
struct class *bt_class = NULL;
EXPORT_SYMBOL_GPL(bt_class);
static struct bus_type bt_bus = {
.name = "bluetooth",
};
static struct platform_device *bt_platform;
static void bt_release(struct device *dev)
{
struct hci_dev *hdev = dev_get_drvdata(dev);
kfree(hdev);
}
int hci_register_sysfs(struct hci_dev *hdev)
{
struct device *dev = &hdev->dev;
unsigned int i;
int err;
BT_DBG("%p name %s type %d", hdev, hdev->name, hdev->type);
dev->class = bt_class;
if (hdev->parent)
dev->parent = hdev->parent;
else
dev->parent = &bt_platform->dev;
strlcpy(dev->bus_id, hdev->name, BUS_ID_SIZE);
dev->release = bt_release;
dev_set_drvdata(dev, hdev);
err = device_register(dev);
if (err < 0)
return err;
for (i = 0; bt_attrs[i]; i++)
device_create_file(dev, bt_attrs[i]);
return 0;
}
void hci_unregister_sysfs(struct hci_dev *hdev)
{
struct device *dev = &hdev->dev;
BT_DBG("%p name %s type %d", hdev, hdev->name, hdev->type);
device_del(dev);
}
int __init bt_sysfs_init(void)
{
int err;
bt_platform = platform_device_register_simple("bluetooth", -1, NULL, 0);
if (IS_ERR(bt_platform))
return PTR_ERR(bt_platform);
err = bus_register(&bt_bus);
if (err < 0) {
platform_device_unregister(bt_platform);
return err;
}
bt_class = class_create(THIS_MODULE, "bluetooth");
if (IS_ERR(bt_class)) {
bus_unregister(&bt_bus);
platform_device_unregister(bt_platform);
return PTR_ERR(bt_class);
}
return 0;
}
void __exit bt_sysfs_cleanup(void)
{
class_destroy(bt_class);
bus_unregister(&bt_bus);
platform_device_unregister(bt_platform);
}


@@ -63,11 +63,6 @@ static struct bt_sock_list l2cap_sk_list = {
.lock = RW_LOCK_UNLOCKED
};
static int l2cap_conn_del(struct hci_conn *conn, int err);
static void __l2cap_chan_add(struct l2cap_conn *conn, struct sock *sk, struct sock *parent);
static void l2cap_chan_del(struct sock *sk, int err);
static void __l2cap_sock_close(struct sock *sk, int reason);
static void l2cap_sock_close(struct sock *sk);
static void l2cap_sock_kill(struct sock *sk);
@@ -109,24 +104,177 @@ static void l2cap_sock_init_timer(struct sock *sk)
sk->sk_timer.data = (unsigned long)sk;
}
/* ---- L2CAP channels ---- */
static struct sock *__l2cap_get_chan_by_dcid(struct l2cap_chan_list *l, u16 cid)
{
struct sock *s;
for (s = l->head; s; s = l2cap_pi(s)->next_c) {
if (l2cap_pi(s)->dcid == cid)
break;
}
return s;
}
static struct sock *__l2cap_get_chan_by_scid(struct l2cap_chan_list *l, u16 cid)
{
struct sock *s;
for (s = l->head; s; s = l2cap_pi(s)->next_c) {
if (l2cap_pi(s)->scid == cid)
break;
}
return s;
}
/* Find channel with given SCID.
* Returns locked socket */
static inline struct sock *l2cap_get_chan_by_scid(struct l2cap_chan_list *l, u16 cid)
{
struct sock *s;
read_lock(&l->lock);
s = __l2cap_get_chan_by_scid(l, cid);
if (s) bh_lock_sock(s);
read_unlock(&l->lock);
return s;
}
static struct sock *__l2cap_get_chan_by_ident(struct l2cap_chan_list *l, u8 ident)
{
struct sock *s;
for (s = l->head; s; s = l2cap_pi(s)->next_c) {
if (l2cap_pi(s)->ident == ident)
break;
}
return s;
}
static inline struct sock *l2cap_get_chan_by_ident(struct l2cap_chan_list *l, u8 ident)
{
struct sock *s;
read_lock(&l->lock);
s = __l2cap_get_chan_by_ident(l, ident);
if (s) bh_lock_sock(s);
read_unlock(&l->lock);
return s;
}
static u16 l2cap_alloc_cid(struct l2cap_chan_list *l)
{
u16 cid = 0x0040;
for (; cid < 0xffff; cid++) {
if(!__l2cap_get_chan_by_scid(l, cid))
return cid;
}
return 0;
}
static inline void __l2cap_chan_link(struct l2cap_chan_list *l, struct sock *sk)
{
sock_hold(sk);
if (l->head)
l2cap_pi(l->head)->prev_c = sk;
l2cap_pi(sk)->next_c = l->head;
l2cap_pi(sk)->prev_c = NULL;
l->head = sk;
}
static inline void l2cap_chan_unlink(struct l2cap_chan_list *l, struct sock *sk)
{
struct sock *next = l2cap_pi(sk)->next_c, *prev = l2cap_pi(sk)->prev_c;
write_lock(&l->lock);
if (sk == l->head)
l->head = next;
if (next)
l2cap_pi(next)->prev_c = prev;
if (prev)
l2cap_pi(prev)->next_c = next;
write_unlock(&l->lock);
__sock_put(sk);
}
static void __l2cap_chan_add(struct l2cap_conn *conn, struct sock *sk, struct sock *parent)
{
struct l2cap_chan_list *l = &conn->chan_list;
BT_DBG("conn %p, psm 0x%2.2x, dcid 0x%4.4x", conn, l2cap_pi(sk)->psm, l2cap_pi(sk)->dcid);
l2cap_pi(sk)->conn = conn;
if (sk->sk_type == SOCK_SEQPACKET) {
/* Alloc CID for connection-oriented socket */
l2cap_pi(sk)->scid = l2cap_alloc_cid(l);
} else if (sk->sk_type == SOCK_DGRAM) {
/* Connectionless socket */
l2cap_pi(sk)->scid = 0x0002;
l2cap_pi(sk)->dcid = 0x0002;
l2cap_pi(sk)->omtu = L2CAP_DEFAULT_MTU;
} else {
/* Raw socket can send/recv signalling messages only */
l2cap_pi(sk)->scid = 0x0001;
l2cap_pi(sk)->dcid = 0x0001;
l2cap_pi(sk)->omtu = L2CAP_DEFAULT_MTU;
}
__l2cap_chan_link(l, sk);
if (parent)
bt_accept_enqueue(parent, sk);
}
/* Delete channel.
* Must be called on the locked socket. */
static void l2cap_chan_del(struct sock *sk, int err)
{
struct l2cap_conn *conn = l2cap_pi(sk)->conn;
struct sock *parent = bt_sk(sk)->parent;
l2cap_sock_clear_timer(sk);
BT_DBG("sk %p, conn %p, err %d", sk, conn, err);
if (conn) {
/* Unlink from channel list */
l2cap_chan_unlink(&conn->chan_list, sk);
l2cap_pi(sk)->conn = NULL;
hci_conn_put(conn->hcon);
}
sk->sk_state = BT_CLOSED;
sock_set_flag(sk, SOCK_ZAPPED);
if (err)
sk->sk_err = err;
if (parent) {
bt_accept_unlink(sk);
parent->sk_data_ready(parent, 0);
} else
sk->sk_state_change(sk);
}
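As the comment above says, l2cap_chan_del() must be called on a locked socket, which is exactly what the *_get_chan_by_* helpers arrange by returning a bh-locked socket. A minimal sketch of the implied calling convention (the wrapper name is illustrative, not from the patch):

static void example_chan_teardown(struct sock *sk, int err)
{
	bh_lock_sock(sk);	/* l2cap_chan_del() expects this to be held */
	l2cap_chan_del(sk, err);
	bh_unlock_sock(sk);
}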
/* ---- L2CAP connections ---- */
static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon, u8 status)
{
struct l2cap_conn *conn = hcon->l2cap_data;
if (conn || status)
return conn;
conn = kzalloc(sizeof(struct l2cap_conn), GFP_ATOMIC);
if (!conn)
return NULL;
hcon->l2cap_data = conn;
conn->hcon = hcon;
BT_DBG("hcon %p conn %p", hcon, conn);
conn->mtu = hcon->hdev->acl_mtu;
conn->src = &hcon->hdev->bdaddr;
conn->dst = &hcon->dst;
@@ -134,17 +282,16 @@ static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon, u8 status)
spin_lock_init(&conn->lock);
rwlock_init(&conn->chan_list.lock);
BT_DBG("hcon %p conn %p", hcon, conn);
return conn;
}
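The allocation in l2cap_conn_add() was switched from a kmalloc()/memset() pair to kzalloc(); the two spellings are equivalent, which is what makes this part of the change a pure cleanup. Side by side (sketch, kernel context):

/* before: allocate, then zero by hand */
conn = kmalloc(sizeof(struct l2cap_conn), GFP_ATOMIC);
if (conn)
	memset(conn, 0, sizeof(struct l2cap_conn));

/* after: allocate pre-zeroed memory in one call */
conn = kzalloc(sizeof(struct l2cap_conn), GFP_ATOMIC);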
static void l2cap_conn_del(struct hci_conn *hcon, int err)
{
struct l2cap_conn *conn = hcon->l2cap_data;
struct sock *sk;
if (!conn)
return;
BT_DBG("hcon %p conn %p, err %d", hcon, conn, err);
@@ -161,7 +308,6 @@ static int l2cap_conn_del(struct hci_conn *hcon, int err)
hcon->l2cap_data = NULL;
kfree(conn);
}
static inline void l2cap_chan_add(struct l2cap_conn *conn, struct sock *sk, struct sock *parent)
@@ -925,160 +1071,6 @@ static int l2cap_sock_release(struct socket *sock)
return err;
}
static void l2cap_conn_ready(struct l2cap_conn *conn)
{
struct l2cap_chan_list *l = &conn->chan_list;
@@ -1834,7 +1826,9 @@ drop:
kfree_skb(skb);
done:
if (sk)
bh_unlock_sock(sk);
return 0;
}
@@ -1925,18 +1919,18 @@ static int l2cap_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 type)
static int l2cap_connect_cfm(struct hci_conn *hcon, u8 status)
{
BT_DBG("hcon %p bdaddr %s status %d", hcon, batostr(&hcon->dst), status);
if (hcon->type != ACL_LINK)
return 0;
if (!status) {
struct l2cap_conn *conn;
conn = l2cap_conn_add(hcon, status);
if (conn)
l2cap_conn_ready(conn);
} else
l2cap_conn_del(hcon, bt_err(status));
return 0;
@@ -1950,19 +1944,21 @@ static int l2cap_disconn_ind(struct hci_conn *hcon, u8 reason)
return 0;
l2cap_conn_del(hcon, bt_err(reason));
return 0;
}
static int l2cap_auth_cfm(struct hci_conn *hcon, u8 status)
{
struct l2cap_chan_list *l;
struct l2cap_conn *conn = hcon->l2cap_data;
struct l2cap_conn_rsp rsp;
struct sock *sk;
int result;
if (!conn)
return 0;
l = &conn->chan_list;
BT_DBG("conn %p", conn);
@@ -2005,13 +2001,14 @@ static int l2cap_auth_cfm(struct hci_conn *hcon, u8 status)
static int l2cap_encrypt_cfm(struct hci_conn *hcon, u8 status)
{
struct l2cap_chan_list *l;
struct l2cap_conn *conn = hcon->l2cap_data;
struct l2cap_conn_rsp rsp;
struct sock *sk;
int result;
if (!conn)
return 0;
l = &conn->chan_list;
BT_DBG("conn %p", conn);
@@ -2219,7 +2216,7 @@ static int __init l2cap_init(void)
goto error;
}
class_create_file(bt_class, &class_attr_l2cap);
BT_INFO("L2CAP ver %s", VERSION);
BT_INFO("L2CAP socket layer initialized");
@@ -2233,7 +2230,7 @@ error:
static void __exit l2cap_exit(void)
{
class_remove_file(bt_class, &class_attr_l2cap);
if (bt_sock_unregister(BTPROTO_L2CAP) < 0)
BT_ERR("L2CAP socket unregistration failed");


@@ -52,8 +52,9 @@
#define BT_DBG(D...)
#endif
#define VERSION "1.7"
#define VERSION "1.8"
static int disable_cfc = 0;
static unsigned int l2cap_mtu = RFCOMM_MAX_L2CAP_MTU;
static struct task_struct *rfcomm_thread;
@@ -533,7 +534,7 @@ static struct rfcomm_session *rfcomm_session_add(struct socket *sock, int state)
s->sock = sock;
s->mtu = RFCOMM_DEFAULT_MTU;
s->cfc = disable_cfc ? RFCOMM_CFC_DISABLED : RFCOMM_CFC_UNKNOWN;
/* Do not increment module usage count for listening sessions.
* Otherwise we won't be able to unload the module. */
@@ -1149,6 +1150,8 @@ static inline int rfcomm_check_link_mode(struct rfcomm_dlc *d)
static void rfcomm_dlc_accept(struct rfcomm_dlc *d)
{
struct sock *sk = d->session->sock->sk;
BT_DBG("dlc %p", d);
rfcomm_send_ua(d->session, d->dlci);
@@ -1158,6 +1161,9 @@ static void rfcomm_dlc_accept(struct rfcomm_dlc *d)
d->state_change(d, 0);
rfcomm_dlc_unlock(d);
if (d->link_mode & RFCOMM_LM_MASTER)
hci_conn_switch_role(l2cap_pi(sk)->conn->hcon, 0x00);
rfcomm_send_msc(d->session, 1, d->dlci, d->v24_sig);
}
@@ -1222,14 +1228,18 @@ static int rfcomm_apply_pn(struct rfcomm_dlc *d, int cr, struct rfcomm_pn *pn)
BT_DBG("dlc %p state %ld dlci %d mtu %d fc 0x%x credits %d",
d, d->state, d->dlci, pn->mtu, pn->flow_ctrl, pn->credits);
if ((pn->flow_ctrl == 0xf0 && s->cfc != RFCOMM_CFC_DISABLED) ||
pn->flow_ctrl == 0xe0) {
d->cfc = RFCOMM_CFC_ENABLED;
d->tx_credits = pn->credits;
} else {
d->cfc = RFCOMM_CFC_DISABLED;
set_bit(RFCOMM_TX_THROTTLED, &d->flags);
}
if (s->cfc == RFCOMM_CFC_UNKNOWN)
s->cfc = d->cfc;
d->priority = pn->priority;
d->mtu = s->mtu = btohs(pn->mtu);
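In the PN negotiation above, 0xf0 is a credit-based flow control proposal from the initiator and 0xe0 is the acceptance from the responder; with the new disable_cfc parameter a proposal is only honoured while the session has not been forced to RFCOMM_CFC_DISABLED. A standalone re-statement of that branch (constants and harness are illustrative, not from the patch):

#include <stdio.h>

enum { CFC_UNKNOWN, CFC_DISABLED, CFC_ENABLED };

static int cfc_result(unsigned int flow_ctrl, int session_cfc)
{
	if ((flow_ctrl == 0xf0 && session_cfc != CFC_DISABLED) ||
	    flow_ctrl == 0xe0)
		return CFC_ENABLED;
	return CFC_DISABLED;
}

int main(void)
{
	printf("%d\n", cfc_result(0xf0, CFC_UNKNOWN));	/* 2: proposal accepted */
	printf("%d\n", cfc_result(0xf0, CFC_DISABLED));	/* 1: CFC forced off */
	printf("%d\n", cfc_result(0xe0, CFC_UNKNOWN));	/* 2: peer accepted CFC */
	return 0;
}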
@@ -2035,7 +2045,7 @@ static int __init rfcomm_init(void)
kernel_thread(rfcomm_run, NULL, CLONE_KERNEL);
class_create_file(bt_class, &class_attr_rfcomm_dlc);
rfcomm_init_sockets();
@@ -2050,7 +2060,7 @@ static int __init rfcomm_init(void)
static void __exit rfcomm_exit(void)
{
class_remove_file(bt_class, &class_attr_rfcomm_dlc);
hci_unregister_cb(&rfcomm_cb);
@@ -2073,6 +2083,9 @@ static void __exit rfcomm_exit(void)
module_init(rfcomm_init);
module_exit(rfcomm_exit);
module_param(disable_cfc, bool, 0644);
MODULE_PARM_DESC(disable_cfc, "Disable credit based flow control");
module_param(l2cap_mtu, uint, 0644);
MODULE_PARM_DESC(l2cap_mtu, "Default MTU for the L2CAP connection");


@@ -944,7 +944,7 @@ int __init rfcomm_init_sockets(void)
if (err < 0)
goto error;
class_create_file(bt_class, &class_attr_rfcomm);
BT_INFO("RFCOMM socket layer initialized");
@@ -958,7 +958,7 @@ error:
void __exit rfcomm_cleanup_sockets(void)
{
class_remove_file(bt_class, &class_attr_rfcomm);
if (bt_sock_unregister(BTPROTO_RFCOMM) < 0)
BT_ERR("RFCOMM socket layer unregistration failed");


@@ -969,7 +969,7 @@ static int __init sco_init(void)
goto error;
}
class_create_file(bt_class, &class_attr_sco);
BT_INFO("SCO (Voice Link) ver %s", VERSION);
BT_INFO("SCO socket layer initialized");
@@ -983,7 +983,7 @@ error:
static void __exit sco_exit(void)
{
class_remove_file(bt_class, &class_attr_sco);
if (bt_sock_unregister(BTPROTO_SCO) < 0)
BT_ERR("SCO socket unregistration failed");


@@ -117,12 +117,13 @@ static int br_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
continue;
if (idx < s_idx)
goto cont;
err = br_fill_ifinfo(skb, p, NETLINK_CB(cb->skb).pid,
cb->nlh->nlmsg_seq, RTM_NEWLINK, NLM_F_MULTI);
if (err <= 0)
break;
cont:
++idx;
}
read_unlock(&dev_base_lock);
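The fix above matters for netlink dump resumption: with the old `continue`, a skipped port never reached `++idx`, so the cursor fell out of step with the port list and a resumed dump reported the wrong interfaces. A standalone sketch of the corrected counting (names and data are illustrative):

#include <stdio.h>

/* Count every entry toward the resume cursor, emitted or not,
 * mirroring the `goto cont` fix above. */
static void dump(const char **ports, int n, int s_idx)
{
	int idx = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (idx < s_idx)
			goto cont;	/* skipped, but still counted */
		printf("%s\n", ports[i]);
cont:
		idx++;
	}
}

int main(void)
{
	const char *ports[] = { "eth0", "eth1", "eth2" };

	dump(ports, 3, 1);	/* resumes correctly: prints eth1, eth2 */
	return 0;
}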


@@ -1106,7 +1106,15 @@ static struct sk_buff *inet_gso_segment(struct sk_buff *skb, int features)
int ihl;
int id;
if (unlikely(skb_shinfo(skb)->gso_type &
~(SKB_GSO_TCPV4 |
SKB_GSO_UDP |
SKB_GSO_DODGY |
SKB_GSO_TCP_ECN |
0)))
goto out;
if (unlikely(!pskb_may_pull(skb, sizeof(*iph))))
goto out;
iph = skb->nh.iph;
@@ -1114,7 +1122,7 @@ static struct sk_buff *inet_gso_segment(struct sk_buff *skb, int features)
if (ihl < sizeof(*iph))
goto out;
if (unlikely(!pskb_may_pull(skb, ihl)))
goto out;
skb->h.raw = __skb_pull(skb, ihl);
@@ -1125,7 +1133,7 @@ static struct sk_buff *inet_gso_segment(struct sk_buff *skb, int features)
rcu_read_lock();
ops = rcu_dereference(inet_protos[proto]);
if (likely(ops && ops->gso_segment))
segs = ops->gso_segment(skb, features);
rcu_read_unlock();


@@ -2170,8 +2170,19 @@ struct sk_buff *tcp_tso_segment(struct sk_buff *skb, int features)
if (skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) {
/* Packet is from an untrusted source, reset gso_segs. */
int type = skb_shinfo(skb)->gso_type;
int mss;
if (unlikely(type &
~(SKB_GSO_TCPV4 |
SKB_GSO_DODGY |
SKB_GSO_TCP_ECN |
SKB_GSO_TCPV6 |
0) ||
!(type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))))
goto out;
mss = skb_shinfo(skb)->gso_size;
skb_shinfo(skb)->gso_segs = (skb->len + mss - 1) / mss;
segs = NULL;
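For packets from untrusted sources, gso_segs is recomputed as the ceiling of the payload length divided by the MSS. A quick standalone check of that arithmetic:

#include <stdio.h>

int main(void)
{
	unsigned int len = 4096, mss = 1460;

	/* Ceiling division, as in tcp_tso_segment() above. */
	printf("%u\n", (len + mss - 1) / mss);	/* prints 3 */
	return 0;
}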


@@ -64,6 +64,14 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb, int features)
struct inet6_protocol *ops;
int proto;
if (unlikely(skb_shinfo(skb)->gso_type &
~(SKB_GSO_UDP |
SKB_GSO_DODGY |
SKB_GSO_TCP_ECN |
SKB_GSO_TCPV6 |
0)))
goto out;
if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h))))
goto out;
@@ -111,7 +119,8 @@ unlock:
for (skb = segs; skb; skb = skb->next) {
ipv6h = skb->nh.ipv6h;
ipv6h->payload_len = htons(skb->len - skb->mac_len -
sizeof(*ipv6h));
}
out:
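Unlike IPv4's tot_len, the IPv6 payload_len field excludes the fixed 40-byte IPv6 header, which is exactly what the fix above subtracts. Worked numbers (illustrative):

#include <stdio.h>

int main(void)
{
	unsigned int skb_len = 1554;	/* whole frame as seen by GSO */
	unsigned int mac_len = 14;	/* Ethernet header */
	unsigned int ip6_hdr = 40;	/* sizeof(struct ipv6hdr) */

	/* As in the fixed ipv6_gso_segment() above. */
	printf("%u\n", skb_len - mac_len - ip6_hdr);	/* prints 1500 */
	return 0;
}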


@@ -25,6 +25,7 @@
#include <linux/vmalloc.h>
#include <linux/netdevice.h>
#include <linux/module.h>
#include <linux/poison.h>
#include <linux/icmpv6.h>
#include <net/ipv6.h>
#include <asm/uaccess.h>
@@ -376,7 +377,7 @@ ip6t_do_table(struct sk_buff **pskb,
} while (!hotdrop);
#ifdef CONFIG_NETFILTER_DEBUG
((struct ip6t_entry *)table_base)->comefrom = NETFILTER_LINK_POISON;
#endif
read_unlock_bh(&table->lock);


@@ -800,7 +800,7 @@ static int nr_accept(struct socket *sock, struct socket *newsock, int flags)
/* Now attach up the new socket */
kfree_skb(skb);
sk_acceptq_removed(sk);
newsock->sk = newsk;
out:
@@ -985,7 +985,7 @@ int nr_rx_frame(struct sk_buff *skb, struct net_device *dev)
nr_make->vr = 0;
nr_make->vl = 0;
nr_make->state = NR_STATE_3;
sk_acceptq_added(sk);
nr_insert_socket(make);
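Both NET/ROM hunks replace open-coded sk_ack_backlog arithmetic with the accept-queue helpers. In kernels of this era those helpers are thin wrappers, so the change is cosmetic; their definitions in include/net/sock.h are essentially:

static inline void sk_acceptq_removed(struct sock *sk)
{
	sk->sk_ack_backlog--;
}

static inline void sk_acceptq_added(struct sock *sk)
{
	sk->sk_ack_backlog++;
}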


@@ -752,7 +752,7 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_len)
rose_insert_socket(sk); /* Finish the bind */
}
rose_try_next_neigh:
rose->dest_addr = addr->srose_addr;
rose->dest_call = addr->srose_call;
rose->rand = ((long)rose & 0xFFFF) + rose->lci;
@@ -810,6 +810,11 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_len)
}
if (sk->sk_state != TCP_ESTABLISHED) {
/* Try next neighbour */
rose->neighbour = rose_get_neigh(&addr->srose_addr, &cause, &diagnostic);
if (rose->neighbour)
goto rose_try_next_neigh;
/* No more neighbour */
sock->state = SS_UNCONNECTED;
return sock_error(sk); /* Always set at this point */
}


@@ -59,6 +59,7 @@ static int rose_rebuild_header(struct sk_buff *skb)
struct net_device_stats *stats = netdev_priv(dev);
unsigned char *bp = (unsigned char *)skb->data;
struct sk_buff *skbn;
unsigned int len;
#ifdef CONFIG_INET
if (arp_find(bp + 7, skb)) {
@@ -75,6 +76,8 @@ static int rose_rebuild_header(struct sk_buff *skb)
kfree_skb(skb);
len = skbn->len;
if (!rose_route_frame(skbn, NULL)) {
kfree_skb(skbn);
stats->tx_errors++;
@@ -82,7 +85,7 @@ static int rose_rebuild_header(struct sk_buff *skb)
}
stats->tx_packets++;
stats->tx_bytes += len;
#endif
return 1;
}
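The rose_rebuild_header() change is an ownership fix: on success rose_route_frame() consumes skbn, so the old `stats->tx_bytes += skbn->len` read memory that might already be freed. The safe pattern, in outline (kernel context, names from the hunk above):

len = skbn->len;			/* sample before handing the skb off */
if (!rose_route_frame(skbn, NULL)) {
	kfree_skb(skbn);		/* failure: skbn is still ours to free */
	stats->tx_errors++;
	return 1;
}
/* success: skbn now belongs to the routing path; don't touch it */
stats->tx_packets++;
stats->tx_bytes += len;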


@@ -297,7 +297,10 @@ static inline struct tipc_msg *buf_msg(struct sk_buff *skb)
* buf_acquire - creates a TIPC message buffer
* @size: message size (including TIPC header)
*
* Returns a new buffer with data pointers set to the specified size.
*
* NOTE: Headroom is reserved to allow prepending of a data link header.
* There may also be unrequested tailroom present at the buffer's end.
*/
static inline struct sk_buff *buf_acquire(u32 size)
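A body consistent with the updated kernel-doc would look roughly like the sketch below; BUF_HEADROOM and the word-alignment rounding (the source of the "unrequested tailroom") are assumptions about the surrounding TIPC code, not shown in this hunk:

{
	struct sk_buff *skb;
	unsigned int buf_size = (BUF_HEADROOM + size + 3) & ~3u;	/* assumed rounding */

	skb = alloc_skb(buf_size, GFP_ATOMIC);
	if (skb) {
		skb_reserve(skb, BUF_HEADROOM);	/* headroom for a data link header */
		skb_put(skb, size);		/* data pointers cover exactly `size` */
	}
	return skb;
}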


@@ -998,6 +998,8 @@ static int link_bundle_buf(struct link *l_ptr,
return 0;
if (skb_tailroom(bundler) < (pad + size))
return 0;
if (link_max_pkt(l_ptr) < (to_pos + size))
return 0;
skb_put(bundler, pad + size);
memcpy(bundler->data + to_pos, buf->data, size);
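The new link_max_pkt() test is needed precisely because of the buf_acquire() note above: a bundler buffer may carry spare tailroom beyond the link's maximum packet size, so tailroom alone is not a safe admission check. Illustrative numbers:

#include <stdio.h>

int main(void)
{
	unsigned int max_pkt = 1500;	/* link's maximum packet size */
	unsigned int to_pos = 1496;	/* current end of the bundle */
	unsigned int size = 60;		/* message to be bundled */
	unsigned int tailroom = 128;	/* slack left by buffer rounding */

	printf("%d\n", tailroom >= size);		/* 1: old check would admit */
	printf("%d\n", max_pkt >= to_pos + size);	/* 0: new check rejects */
	return 0;
}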


@@ -144,7 +144,7 @@ static inline void unix_set_secdata(struct scm_cookie *scm, struct sk_buff *skb)
scm->seclen = *UNIXSECLEN(skb);
}
#else
static inline void unix_get_peersec_dgram(struct sk_buff *skb)
{ }
static inline void unix_set_secdata(struct scm_cookie *scm, struct sk_buff *skb)