Merge branch 'for_3.0/pm-fixes' of ssh://master.kernel.org/pub/scm/linux/kernel/git/khilman/linux-omap-pm into fixes

Tony Lindgren 2011-06-13 07:40:25 -07:00
commit c8e0bf95fc
529 changed files with 15334 additions and 6569 deletions

View file

@ -99,18 +99,11 @@ o "qp" indicates that RCU still expects a quiescent state from
o "dt" is the current value of the dyntick counter that is incremented
when entering or leaving dynticks idle state, either by the
scheduler or by irq. The number after the "/" is the interrupt
nesting depth when in dyntick-idle state, or one greater than
the interrupt-nesting depth otherwise.
This field is displayed only for CONFIG_NO_HZ kernels.
o "dn" is the current value of the dyntick counter that is incremented
when entering or leaving dynticks idle state via NMI. If both
the "dt" and "dn" values are even, then this CPU is in dynticks
idle mode and may be ignored by RCU. If either of these two
counters is odd, then RCU must be alert to the possibility of
an RCU read-side critical section running on this CPU.
scheduler or by irq. This number is even if the CPU is in
dyntick idle mode and odd otherwise. The number after the first
"/" is the interrupt nesting depth when in dyntick-idle state,
or one greater than the interrupt-nesting depth otherwise.
The number after the second "/" is the NMI nesting depth.
This field is displayed only for CONFIG_NO_HZ kernels.

View file

@ -66,3 +66,8 @@ Note: We can use a kernel with multiple custom ACPI method running,
but each individual write to debugfs can implement a SINGLE
method override; i.e., if we want to insert/override multiple
ACPI methods, we need to repeat steps c) ~ g) multiple times.
Note: Be aware that root can misuse this driver to modify arbitrary
memory and gain additional rights if root's privileges have been
restricted (for example, if root is not allowed to load additional
modules after boot).

View file

@ -1 +1,96 @@
See Documentation/crypto/async-tx-api.txt
DMA Engine API Guide
====================
Vinod Koul <vinod dot koul at intel.com>
NOTE: For DMA Engine usage in async_tx please see:
Documentation/crypto/async-tx-api.txt
Below is a guide for device driver writers on how to use the slave-DMA API of
the DMA Engine. This is applicable to slave DMA usage only.
The slave DMA usage consists of following steps
1. Allocate a DMA slave channel
2. Set slave and controller specific parameters
3. Get a descriptor for transaction
4. Submit the transaction and wait for callback notification
1. Allocate a DMA slave channel
Channel allocation is slightly different in the slave DMA context: client
drivers typically need a channel from a particular DMA controller, and in
some cases even a specific channel is desired. To request a channel, the
dma_request_channel() API is used.
Interface:
struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
dma_filter_fn filter_fn,
void *filter_param);
where dma_filter_fn is defined as:
typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
When the optional 'filter_fn' parameter is set to NULL, dma_request_channel
simply returns the first channel that satisfies the capability mask. Otherwise,
when the mask parameter is insufficient for specifying the necessary channel,
the filter_fn routine can be used to select from the available channels in the
system. The filter_fn routine is called once for each free channel in the
system. Upon seeing a suitable channel, filter_fn returns true, which flags
that channel to be the return value from dma_request_channel. A channel
allocated via this interface is exclusive to the caller until
dma_release_channel() is called.
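For illustration, a minimal sketch of requesting a slave channel is shown
below. The filter routine and the matching token it checks are hypothetical
placeholders, not part of the API:

	#include <linux/dmaengine.h>

	static bool my_filter(struct dma_chan *chan, void *filter_param)
	{
		/* Hypothetical match: accept only the channel whose
		 * driver-private data equals the token we were given. */
		return chan->private == filter_param;
	}

	static struct dma_chan *my_get_channel(void *my_match_token)
	{
		dma_cap_mask_t mask;

		dma_cap_zero(mask);
		dma_cap_set(DMA_SLAVE, mask);
		/* Returns NULL if no suitable free channel exists. */
		return dma_request_channel(mask, my_filter, my_match_token);
	}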
2. Set slave and controller specific parameters
The next step is always to pass some specific information to the DMA driver.
Most of the generic information which a slave DMA can use is in struct
dma_slave_config. It allows the clients to specify DMA direction, DMA
addresses, bus widths, DMA burst lengths, etc. If some DMA controllers have
more parameters to be sent, they should embed struct dma_slave_config in their
controller-specific structure. That gives the client the flexibility to pass
more parameters, if required.
Interface:
int dmaengine_slave_config(struct dma_chan *chan,
struct dma_slave_config *config)
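As a sketch, configuring a channel for memory-to-peripheral transfers might
look like the following; the FIFO bus address is a hypothetical device
constant:

	struct dma_slave_config cfg = {
		.direction	= DMA_TO_DEVICE,	/* memory -> peripheral */
		.dst_addr	= MY_DEV_FIFO_ADDR,	/* hypothetical FIFO bus address */
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst	= 4,			/* words per burst */
	};
	int ret;

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		return ret;	/* controller rejected the configuration */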
3. Get a descriptor for transaction
For slave usage, the various modes of slave transfers supported by the
DMA-engine are:
slave_sg - DMA a list of scatter gather buffers from/to a peripheral
dma_cyclic - Perform a cyclic DMA operation from/to a peripheral until the
operation is explicitly stopped.
A non-NULL return from this transfer API represents a "descriptor" for the
given transaction.
Interface:
struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_sg)(
struct dma_chan *chan,
struct scatterlist *dst_sg, unsigned int dst_nents,
struct scatterlist *src_sg, unsigned int src_nents,
unsigned long flags);
struct dma_async_tx_descriptor *(*chan->device->device_prep_dma_cyclic)(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_data_direction direction);
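As a sketch, a cyclic receive from a peripheral could be prepared as below;
buf_dma, buf_len and period_len are assumed to have been set up by the
client, and the completion routine is a hypothetical client function:

	struct dma_async_tx_descriptor *desc;

	desc = chan->device->device_prep_dma_cyclic(chan, buf_dma, buf_len,
						    period_len, DMA_FROM_DEVICE);
	if (!desc)
		return -EBUSY;	/* channel could not prepare the transaction */

	/* Optionally attach a completion callback before submitting. */
	desc->callback = my_dma_complete;	/* hypothetical client routine */
	desc->callback_param = my_dev;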
4. Submit the transaction and wait for callback notification
To have the transaction executed by the DMA device, the "descriptor"
returned in step (3) above needs to be submitted.
To tell the DMA driver that a transaction is ready to be serviced, the
descriptor->tx_submit() callback needs to be invoked. This chains the
descriptor onto the pending queue.
The transactions in the pending queue can be activated by calling the
issue_pending API. If the channel is idle then the first transaction in the
queue is started and subsequent ones are queued up.
On completion of each DMA operation the next in queue is started and a tasklet
triggered. The tasklet then calls the client driver's completion callback
routine for notification, if set.
Interface:
void dma_async_issue_pending(struct dma_chan *chan);
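Putting steps (3) and (4) together, a sketch of the submission sequence:

	dma_cookie_t cookie;

	cookie = desc->tx_submit(desc);	/* chain onto the pending queue */
	if (dma_submit_error(cookie))
		return -EIO;	/* the driver refused the descriptor */

	dma_async_issue_pending(chan);	/* start servicing the queue */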
==============================================================================
Additional usage notes for dma driver writers
1/ Although the DMA engine API specifies that completion callback routines
cannot submit any new operations, for slave DMA a subsequent transaction may
not be available for submission prior to the callback routine being called, so
this restriction does not apply to DMA-slave devices. They should, however,
take care to drop any spin-lock they might be holding before calling the
callback routine.
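A sketch of this pattern from the DMA driver's side follows; the channel
structure and its fields are hypothetical:

	static void my_dma_tasklet(unsigned long data)
	{
		struct my_dma_chan *mchan = (struct my_dma_chan *)data;
		dma_async_tx_callback callback;
		void *param;

		spin_lock(&mchan->lock);
		callback = mchan->active->txd.callback;
		param = mchan->active->txd.callback_param;
		/* ... retire the descriptor, start the next queued one ... */
		spin_unlock(&mchan->lock);	/* drop the lock first ... */

		if (callback)
			callback(param);	/* ... then notify the client */
	}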

View file

@ -6,6 +6,42 @@ be removed from this file.
---------------------------
What: x86 floppy disable_hlt
When: 2012
Why: ancient workaround of dubious utility clutters the
code used by everybody else.
Who: Len Brown <len.brown@intel.com>
---------------------------
What: CONFIG_APM_CPU_IDLE, and its ability to call APM BIOS in idle
When: 2012
Why: This optional sub-feature of APM is of dubious reliability,
and ancient APM laptops are likely better served by calling HLT.
Deleting CONFIG_APM_CPU_IDLE allows x86 to stop exporting
the pm_idle function pointer to modules.
Who: Len Brown <len.brown@intel.com>
----------------------------
What: x86_32 "no-hlt" cmdline param
When: 2012
Why: remove a branch from idle path, simplify code used by everybody.
This option disables the use of HLT in idle and machine_halt()
for hardware that was flaky 15 years ago. Today we have
"idle=poll", which removes HLT from idle, so if such a machine
is still running the upstream kernel, "idle=poll" is likely sufficient.
Who: Len Brown <len.brown@intel.com>
----------------------------
What: x86 "idle=mwait" cmdline param
When: 2012
Why: simplify x86 idle code
Who: Len Brown <len.brown@intel.com>
----------------------------
What: PRISM54
When: 2.6.34

View file

@ -104,7 +104,7 @@ of the locking scheme for directory operations.
prototypes:
struct inode *(*alloc_inode)(struct super_block *sb);
void (*destroy_inode)(struct inode *);
void (*dirty_inode) (struct inode *);
void (*dirty_inode) (struct inode *, int flags);
int (*write_inode) (struct inode *, struct writeback_control *wbc);
int (*drop_inode) (struct inode *);
void (*evict_inode) (struct inode *);
@ -126,7 +126,7 @@ locking rules:
s_umount
alloc_inode:
destroy_inode:
dirty_inode: (must not sleep)
dirty_inode:
write_inode:
drop_inode: !!!inode->i_lock!!!
evict_inode:

View file

@ -211,7 +211,7 @@ struct super_operations {
struct inode *(*alloc_inode)(struct super_block *sb);
void (*destroy_inode)(struct inode *);
void (*dirty_inode) (struct inode *);
void (*dirty_inode) (struct inode *, int flags);
int (*write_inode) (struct inode *, int);
void (*drop_inode) (struct inode *);
void (*delete_inode) (struct inode *);

View file

@ -999,7 +999,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
With this option on every unmap_single operation will
result in a hardware IOTLB flush operation as opposed
to batching them for performance.
sp_off [Default Off]
By default, super pages will be supported if the Intel
IOMMU has the capability. With this option, super pages
will not be supported.
intremap= [X86-64, Intel-IOMMU]
Format: { on (default) | off | nosid }
on enable Interrupt Remapping (default)

View file

@ -1,184 +0,0 @@
Acer Laptop WMI Extras Driver
http://code.google.com/p/aceracpi
Version 0.3
4th April 2009
Copyright 2007-2009 Carlos Corbacho <carlos@strangeworlds.co.uk>
acer-wmi is a driver to allow you to control various parts of your Acer laptop
hardware under Linux which are exposed via ACPI-WMI.
This driver completely replaces the old out-of-tree acer_acpi, which I am
currently maintaining for bug fixes only on pre-2.6.25 kernels. All development
work is now focused solely on acer-wmi.
Disclaimer
**********
Acer and Wistron have provided nothing towards the development of acer_acpi or
acer-wmi. All information we have has been gathered through the efforts of the developers
and the users to discover as much as possible about the hardware.
As such, I do warn that this could break your hardware - this is extremely
unlikely of course, but please bear this in mind.
Background
**********
acer-wmi is derived from acer_acpi, originally developed by Mark
Smith in 2005, then taken over by Carlos Corbacho in 2007, in order to activate
the wireless LAN card under a 64-bit version of Linux, as acerhk[1] (the
previous solution to the problem) relied on making 32 bit BIOS calls which are
not possible in kernel space from a 64 bit OS.
[1] acerhk: http://www.cakey.de/acerhk/
Supported Hardware
******************
NOTE: The Acer Aspire One is not supported hardware. It cannot work with
acer-wmi until Acer fixes the ACPI-WMI implementation on it, so it has been
blacklisted until that happens.
Please see the website for the current list of known working hardware:
http://code.google.com/p/aceracpi/wiki/SupportedHardware
If your laptop is not listed, or listed as unknown, and works with acer-wmi,
please contact me with a copy of the DSDT.
If your Acer laptop doesn't work with acer-wmi, I would also like to see the
DSDT.
To send me the DSDT, as root/sudo:
cat /sys/firmware/acpi/tables/DSDT > dsdt
And send me the resulting 'dsdt' file.
Usage
*****
On Acer laptops, acer-wmi should already be autoloaded based on DMI matching.
For non-Acer laptops, until WMI based autoloading support is added, you will
need to manually load acer-wmi.
acer-wmi creates /sys/devices/platform/acer-wmi, and fills it with various
files whose usage is detailed below, which enables you to control some of the
following (varies between models):
* the wireless LAN card radio
* inbuilt Bluetooth adapter
* inbuilt 3G card
* mail LED of your laptop
* brightness of the LCD panel
Wireless
********
With regards to wireless, all acer-wmi does is enable the radio on the card. It
is not responsible for the wireless LED - once the radio is enabled, this is
down to the wireless driver for your card. So the behaviour of the wireless LED,
once you enable the radio, will depend on your hardware and driver combination.
e.g. With the BCM4318 on the Acer Aspire 5020 series:
ndiswrapper: Light blinks on when transmitting
b43: Solid light, blinks off when transmitting
Wireless radio control is unconditionally enabled - all Acer laptops that support
acer-wmi come with built-in wireless. However, should you ever wish to remove
the card, or swap it out at some point, please get in touch
with me, as we may well be able to gain some data on wireless card detection.
The wireless radio is exposed through rfkill.
Bluetooth
*********
For bluetooth, this is an internal USB dongle, so once enabled, you will get
a USB device connection event, and a new USB device appears. When you disable
bluetooth, you get the reverse - a USB device disconnect event, followed by the
device disappearing again.
Bluetooth is autodetected by acer-wmi, so if you do not have a bluetooth module
installed in your laptop, this file won't exist (please be aware that it is
quite common for Acer not to fit bluetooth to their laptops - so just because
you have a bluetooth button on the laptop, doesn't mean that bluetooth is
installed).
For the adventurously minded - if you want to buy an internal bluetooth
module off the internet that is compatible with your laptop and fit it, then
it will work just fine with acer-wmi.
Bluetooth is exposed through rfkill.
3G
**
3G is currently not autodetected, so the 'threeg' file is always created under
sysfs. So far, no-one in possession of an Acer laptop with 3G built-in appears to
have tried Linux, or reported back, so we don't have any information on this.
If you have an Acer laptop that does have a 3G card in, please contact me so we
can properly detect these, and find out a bit more about them.
To read the status of the 3G card (0=off, 1=on):
cat /sys/devices/platform/acer-wmi/threeg
To enable the 3G card:
echo 1 > /sys/devices/platform/acer-wmi/threeg
To disable the 3G card:
echo 0 > /sys/devices/platform/acer-wmi/threeg
To set the state of the 3G card when loading acer-wmi, pass:
threeg=X (where X is 0 or 1)
Mail LED
********
This can be found in most older Acer laptops supported by acer-wmi, and many
newer ones - it is built into the 'mail' button, and blinks when active.
On newer (WMID) laptops though, we have no way of detecting the mail LED. If
your laptop identifies itself in dmesg as a WMID model, then please try loading
acer_acpi with:
force_series=2490
This will use a known alternative method of reading/writing the mail LED. If
it works, please report back to me with the DMI data from your laptop so this
can be added to acer-wmi.
The LED is exposed through the LED subsystem, and can be found in:
/sys/devices/platform/acer-wmi/leds/acer-wmi::mail/
The mail LED is autodetected, so if you don't have one, the LED device won't
be registered.
Backlight
*********
The backlight brightness control is available on all acer-wmi supported
hardware. The maximum brightness level is usually 15, but on some newer laptops
it's 10 (this is again autodetected).
The backlight is exposed through the backlight subsystem, and can be found in:
/sys/devices/platform/acer-wmi/backlight/acer-wmi/
Credits
*******
Olaf Tauber, who did the real hard work when he developed acerhk
http://www.cakey.de/acerhk/
All the authors of laptop ACPI modules in the kernel, whose work
was an inspiration in the early days of acer_acpi
Mathieu Segaud, who solved the problem with having to modprobe the driver
twice in acer_acpi 0.2.
Jim Ramsay, who added support for the WMID interface
Mark Smith, who started the original acer_acpi
And the many people who have used both acer_acpi and acer-wmi.

View file

@ -12,8 +12,9 @@ Because things like lock contention can severely impact performance.
- HOW
Lockdep already has hooks in the lock functions and maps lock instances to
lock classes. We build on that. The graph below shows the relation between
the lock functions and the various hooks therein.
lock classes. We build on that (see Documentation/lockdep-design.txt).
The graph below shows the relation between the lock functions and the various
hooks therein.
__acquire
|
@ -128,6 +129,37 @@ points are the points we're contending with.
The integer part of the time values is in us.
Dealing with nested locks, subclasses may appear:
32...............................................................................................................................................................................................
33
34 &rq->lock: 13128 13128 0.43 190.53 103881.26 97454 3453404 0.00 401.11 13224683.11
35 ---------
36 &rq->lock 645 [<ffffffff8103bfc4>] task_rq_lock+0x43/0x75
37 &rq->lock 297 [<ffffffff8104ba65>] try_to_wake_up+0x127/0x25a
38 &rq->lock 360 [<ffffffff8103c4c5>] select_task_rq_fair+0x1f0/0x74a
39 &rq->lock 428 [<ffffffff81045f98>] scheduler_tick+0x46/0x1fb
40 ---------
41 &rq->lock 77 [<ffffffff8103bfc4>] task_rq_lock+0x43/0x75
42 &rq->lock 174 [<ffffffff8104ba65>] try_to_wake_up+0x127/0x25a
43 &rq->lock 4715 [<ffffffff8103ed4b>] double_rq_lock+0x42/0x54
44 &rq->lock 893 [<ffffffff81340524>] schedule+0x157/0x7b8
45
46...............................................................................................................................................................................................
47
48 &rq->lock/1: 11526 11488 0.33 388.73 136294.31 21461 38404 0.00 37.93 109388.53
49 -----------
50 &rq->lock/1 11526 [<ffffffff8103ed58>] double_rq_lock+0x4f/0x54
51 -----------
52 &rq->lock/1 5645 [<ffffffff8103ed4b>] double_rq_lock+0x42/0x54
53 &rq->lock/1 1224 [<ffffffff81340524>] schedule+0x157/0x7b8
54 &rq->lock/1 4336 [<ffffffff8103ed58>] double_rq_lock+0x4f/0x54
55 &rq->lock/1 181 [<ffffffff8104ba65>] try_to_wake_up+0x127/0x25a
Line 48 shows statistics for the second subclass (/1) of the &rq->lock class
(subclasses start from 0); in this case, as line 50 suggests,
double_rq_lock actually acquires a nested lock of two spinlocks.
View the top contending locks:
# grep : /proc/lock_stat | head

View file

@ -1,5 +1,5 @@
# This creates the demonstration utility "lguest" which runs a Linux guest.
# Missing headers? Add "-I../../include -I../../arch/x86/include"
# Missing headers? Add "-I../../../include -I../../../arch/x86/include"
CFLAGS:=-m32 -Wall -Wmissing-declarations -Wmissing-prototypes -O3 -U_FORTIFY_SOURCE
all: lguest

View file

@ -49,7 +49,7 @@
#include <linux/virtio_rng.h>
#include <linux/virtio_ring.h>
#include <asm/bootparam.h>
#include "../../include/linux/lguest_launcher.h"
#include "../../../include/linux/lguest_launcher.h"
/*L:110
* We can ignore the 42 include files we need for this program, but I do want
* to draw attention to the use of kernel-style types.
@ -135,9 +135,6 @@ struct device {
/* Is it operational */
bool running;
/* Does Guest want an intrrupt on empty? */
bool irq_on_empty;
/* Device-specific data. */
void *priv;
};
@ -637,10 +634,7 @@ static void trigger_irq(struct virtqueue *vq)
/* If they don't want an interrupt, don't send one... */
if (vq->vring.avail->flags & VRING_AVAIL_F_NO_INTERRUPT) {
/* ... unless they've asked us to force one on empty. */
if (!vq->dev->irq_on_empty
|| lg_last_avail(vq) != vq->vring.avail->idx)
return;
return;
}
/* Send the Guest an interrupt tell them we used something up. */
@ -1057,15 +1051,6 @@ static void create_thread(struct virtqueue *vq)
close(vq->eventfd);
}
static bool accepted_feature(struct device *dev, unsigned int bit)
{
const u8 *features = get_feature_bits(dev) + dev->feature_len;
if (dev->feature_len < bit / CHAR_BIT)
return false;
return features[bit / CHAR_BIT] & (1 << (bit % CHAR_BIT));
}
static void start_device(struct device *dev)
{
unsigned int i;
@ -1079,8 +1064,6 @@ static void start_device(struct device *dev)
verbose(" %02x", get_feature_bits(dev)
[dev->feature_len+i]);
dev->irq_on_empty = accepted_feature(dev, VIRTIO_F_NOTIFY_ON_EMPTY);
for (vq = dev->vq; vq; vq = vq->next) {
if (vq->service)
create_thread(vq);
@ -1564,7 +1547,6 @@ static void setup_tun_net(char *arg)
/* Set up the tun device. */
configure_device(ipfd, tapif, ip);
add_feature(dev, VIRTIO_F_NOTIFY_ON_EMPTY);
/* Expect Guest to handle everything except UFO */
add_feature(dev, VIRTIO_NET_F_CSUM);
add_feature(dev, VIRTIO_NET_F_GUEST_CSUM);

View file

@ -223,10 +223,8 @@ S: Maintained
F: drivers/platform/x86/acerhdf.c
ACER WMI LAPTOP EXTRAS
M: Carlos Corbacho <carlos@strangeworlds.co.uk>
L: aceracpi@googlegroups.com (subscribers-only)
M: Joey Lee <jlee@novell.com>
L: platform-driver-x86@vger.kernel.org
W: http://code.google.com/p/aceracpi
S: Maintained
F: drivers/platform/x86/acer-wmi.c
@ -271,10 +269,8 @@ S: Supported
F: drivers/acpi/video.c
ACPI WMI DRIVER
M: Carlos Corbacho <carlos@strangeworlds.co.uk>
L: platform-driver-x86@vger.kernel.org
W: http://www.lesswatts.org/projects/acpi/
S: Maintained
S: Orphan
F: drivers/platform/x86/wmi.c
AD1889 ALSA SOUND DRIVER
@ -2178,6 +2174,8 @@ M: Dan Williams <dan.j.williams@intel.com>
S: Supported
F: drivers/dma/
F: include/linux/dma*
T: git git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx.git
T: git git://git.infradead.org/users/vkoul/slave-dma.git (slave-dma)
DME1737 HARDWARE MONITOR DRIVER
M: Juerg Haefliger <juergh@gmail.com>
@ -3031,9 +3029,8 @@ S: Maintained
F: drivers/net/wireless/hostap/
HP COMPAQ TC1100 TABLET WMI EXTRAS DRIVER
M: Carlos Corbacho <carlos@strangeworlds.co.uk>
L: platform-driver-x86@vger.kernel.org
S: Odd Fixes
S: Orphan
F: drivers/platform/x86/tc1100-wmi.c
HP100: Driver for HP 10/100 Mbit/s Voice Grade Network Adapter Series
@ -5451,6 +5448,13 @@ L: linux-serial@vger.kernel.org
S: Maintained
F: drivers/tty/serial
SYNOPSYS DESIGNWARE DMAC DRIVER
M: Viresh Kumar <viresh.kumar@st.com>
S: Maintained
F: include/linux/dw_dmac.h
F: drivers/dma/dw_dmac_regs.h
F: drivers/dma/dw_dmac.c
TIMEKEEPING, NTP
M: John Stultz <johnstul@us.ibm.com>
M: Thomas Gleixner <tglx@linutronix.de>

View file

@ -1,8 +1,8 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 39
EXTRAVERSION =
NAME = Flesh-Eating Bats with Fangs
VERSION = 3
PATCHLEVEL = 0
SUBLEVEL = 0
EXTRAVERSION = -rc2
NAME = Sneaky Weasel
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"

View file

@ -189,7 +189,7 @@ static struct dentry *pm_dbg_dir;
static int pm_dbg_init_done;
static int __init pm_dbg_init(void);
static int pm_dbg_init(void);
enum {
DEBUG_FILE_COUNTERS = 0,
@ -595,7 +595,7 @@ static int option_set(void *data, u64 val)
DEFINE_SIMPLE_ATTRIBUTE(pm_dbg_option_fops, option_get, option_set, "%llu\n");
static int __init pm_dbg_init(void)
static int pm_dbg_init(void)
{
int i;
struct dentry *d;

View file

@ -249,6 +249,29 @@ static int slot_cn7_get_cd(struct platform_device *pdev)
{
return !gpio_get_value(GPIO_PORT41);
}
/* MERAM */
static struct sh_mobile_meram_info meram_info = {
.addr_mode = SH_MOBILE_MERAM_MODE1,
};
static struct resource meram_resources[] = {
[0] = {
.name = "MERAM",
.start = 0xe8000000,
.end = 0xe81fffff,
.flags = IORESOURCE_MEM,
},
};
static struct platform_device meram_device = {
.name = "sh_mobile_meram",
.id = 0,
.num_resources = ARRAY_SIZE(meram_resources),
.resource = meram_resources,
.dev = {
.platform_data = &meram_info,
},
};
/* SH_MMCIF */
static struct resource sh_mmcif_resources[] = {
@ -447,13 +470,29 @@ const static struct fb_videomode ap4evb_lcdc_modes[] = {
#endif
},
};
static struct sh_mobile_meram_cfg lcd_meram_cfg = {
.icb[0] = {
.marker_icb = 28,
.cache_icb = 24,
.meram_offset = 0x0,
.meram_size = 0x40,
},
.icb[1] = {
.marker_icb = 29,
.cache_icb = 25,
.meram_offset = 0x40,
.meram_size = 0x40,
},
};
static struct sh_mobile_lcdc_info lcdc_info = {
.meram_dev = &meram_info,
.ch[0] = {
.chan = LCDC_CHAN_MAINLCD,
.bpp = 16,
.lcd_cfg = ap4evb_lcdc_modes,
.num_cfg = ARRAY_SIZE(ap4evb_lcdc_modes),
.meram_cfg = &lcd_meram_cfg,
}
};
@ -724,15 +763,31 @@ static struct platform_device fsi_device = {
static struct platform_device fsi_ak4643_device = {
.name = "sh_fsi2_a_ak4643",
};
static struct sh_mobile_meram_cfg hdmi_meram_cfg = {
.icb[0] = {
.marker_icb = 30,
.cache_icb = 26,
.meram_offset = 0x80,
.meram_size = 0x100,
},
.icb[1] = {
.marker_icb = 31,
.cache_icb = 27,
.meram_offset = 0x180,
.meram_size = 0x100,
},
};
static struct sh_mobile_lcdc_info sh_mobile_lcdc1_info = {
.clock_source = LCDC_CLK_EXTERNAL,
.meram_dev = &meram_info,
.ch[0] = {
.chan = LCDC_CHAN_MAINLCD,
.bpp = 16,
.interface_type = RGB24,
.clock_divider = 1,
.flags = LCDC_FLAGS_DWPOL,
.meram_cfg = &hdmi_meram_cfg,
}
};
@ -961,6 +1016,7 @@ static struct platform_device *ap4evb_devices[] __initdata = {
&csi2_device,
&ceu_device,
&ap4evb_camera,
&meram_device,
};
static void __init hdmi_init_pm_clock(void)

View file

@ -39,6 +39,7 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/physmap.h>
#include <linux/pm_runtime.h>
#include <linux/smsc911x.h>
#include <linux/sh_intc.h>
#include <linux/tca6416_keypad.h>
@ -314,6 +315,30 @@ static struct platform_device smc911x_device = {
},
};
/* MERAM */
static struct sh_mobile_meram_info mackerel_meram_info = {
.addr_mode = SH_MOBILE_MERAM_MODE1,
};
static struct resource meram_resources[] = {
[0] = {
.name = "MERAM",
.start = 0xe8000000,
.end = 0xe81fffff,
.flags = IORESOURCE_MEM,
},
};
static struct platform_device meram_device = {
.name = "sh_mobile_meram",
.id = 0,
.num_resources = ARRAY_SIZE(meram_resources),
.resource = meram_resources,
.dev = {
.platform_data = &mackerel_meram_info,
},
};
/* LCDC */
static struct fb_videomode mackerel_lcdc_modes[] = {
{
@ -342,7 +367,23 @@ static int mackerel_get_brightness(void *board_data)
return gpio_get_value(GPIO_PORT31);
}
static struct sh_mobile_meram_cfg lcd_meram_cfg = {
.icb[0] = {
.marker_icb = 28,
.cache_icb = 24,
.meram_offset = 0x0,
.meram_size = 0x40,
},
.icb[1] = {
.marker_icb = 29,
.cache_icb = 25,
.meram_offset = 0x40,
.meram_size = 0x40,
},
};
static struct sh_mobile_lcdc_info lcdc_info = {
.meram_dev = &mackerel_meram_info,
.clock_source = LCDC_CLK_BUS,
.ch[0] = {
.chan = LCDC_CHAN_MAINLCD,
@ -362,6 +403,7 @@ static struct sh_mobile_lcdc_info lcdc_info = {
.name = "sh_mobile_lcdc_bl",
.max_brightness = 1,
},
.meram_cfg = &lcd_meram_cfg,
}
};
@ -388,8 +430,23 @@ static struct platform_device lcdc_device = {
},
};
static struct sh_mobile_meram_cfg hdmi_meram_cfg = {
.icb[0] = {
.marker_icb = 30,
.cache_icb = 26,
.meram_offset = 0x80,
.meram_size = 0x100,
},
.icb[1] = {
.marker_icb = 31,
.cache_icb = 27,
.meram_offset = 0x180,
.meram_size = 0x100,
},
};
/* HDMI */
static struct sh_mobile_lcdc_info hdmi_lcdc_info = {
.meram_dev = &mackerel_meram_info,
.clock_source = LCDC_CLK_EXTERNAL,
.ch[0] = {
.chan = LCDC_CHAN_MAINLCD,
@ -397,6 +454,7 @@ static struct sh_mobile_lcdc_info hdmi_lcdc_info = {
.interface_type = RGB24,
.clock_divider = 1,
.flags = LCDC_FLAGS_DWPOL,
.meram_cfg = &hdmi_meram_cfg,
}
};
@ -856,6 +914,17 @@ static int slot_cn7_get_cd(struct platform_device *pdev)
}
/* SDHI0 */
static irqreturn_t mackerel_sdhi0_gpio_cd(int irq, void *arg)
{
struct device *dev = arg;
struct sh_mobile_sdhi_info *info = dev->platform_data;
struct tmio_mmc_data *pdata = info->pdata;
tmio_mmc_cd_wakeup(pdata);
return IRQ_HANDLED;
}
static struct sh_mobile_sdhi_info sdhi0_info = {
.dma_slave_tx = SHDMA_SLAVE_SDHI0_TX,
.dma_slave_rx = SHDMA_SLAVE_SDHI0_RX,
@ -1150,6 +1219,7 @@ static struct platform_device *mackerel_devices[] __initdata = {
&mackerel_camera,
&hdmi_lcdc_device,
&hdmi_device,
&meram_device,
};
/* Keypad Initialization */
@ -1238,6 +1308,7 @@ static void __init mackerel_init(void)
{
u32 srcr4;
struct clk *clk;
int ret;
sh7372_pinmux_init();
@ -1343,6 +1414,13 @@ static void __init mackerel_init(void)
gpio_request(GPIO_FN_SDHID0_1, NULL);
gpio_request(GPIO_FN_SDHID0_0, NULL);
ret = request_irq(evt2irq(0x3340), mackerel_sdhi0_gpio_cd,
IRQF_TRIGGER_FALLING, "sdhi0 cd", &sdhi0_device.dev);
if (!ret)
sdhi0_info.tmio_flags |= TMIO_MMC_HAS_COLD_CD;
else
pr_err("Cannot get IRQ #%d: %d\n", evt2irq(0x3340), ret);
#if !defined(CONFIG_MMC_SH_MMCIF) && !defined(CONFIG_MMC_SH_MMCIF_MODULE)
/* enable SDHI1 */
gpio_request(GPIO_FN_SDHICMD1, NULL);

View file

@ -509,6 +509,7 @@ enum { MSTP001,
MSTP118, MSTP117, MSTP116, MSTP113,
MSTP106, MSTP101, MSTP100,
MSTP223,
MSTP218, MSTP217, MSTP216,
MSTP207, MSTP206, MSTP204, MSTP203, MSTP202, MSTP201, MSTP200,
MSTP329, MSTP328, MSTP323, MSTP322, MSTP314, MSTP313, MSTP312,
MSTP423, MSTP415, MSTP413, MSTP411, MSTP410, MSTP406, MSTP403,
@ -534,6 +535,9 @@ static struct clk mstp_clks[MSTP_NR] = {
[MSTP101] = MSTP(&div4_clks[DIV4_M1], SMSTPCR1, 1, 0), /* VPU */
[MSTP100] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 0, 0), /* LCDC0 */
[MSTP223] = MSTP(&div6_clks[DIV6_SPU], SMSTPCR2, 23, 0), /* SPU2 */
[MSTP218] = MSTP(&div4_clks[DIV4_HP], SMSTPCR2, 18, 0), /* DMAC1 */
[MSTP217] = MSTP(&div4_clks[DIV4_HP], SMSTPCR2, 17, 0), /* DMAC2 */
[MSTP216] = MSTP(&div4_clks[DIV4_HP], SMSTPCR2, 16, 0), /* DMAC3 */
[MSTP207] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 7, 0), /* SCIFA5 */
[MSTP206] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 6, 0), /* SCIFB */
[MSTP204] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR2, 4, 0), /* SCIFA0 */
@ -626,6 +630,9 @@ static struct clk_lookup lookups[] = {
CLKDEV_DEV_ID("sh_mobile_lcdc_fb.0", &mstp_clks[MSTP100]), /* LCDC0 */
CLKDEV_DEV_ID("uio_pdrv_genirq.6", &mstp_clks[MSTP223]), /* SPU2DSP0 */
CLKDEV_DEV_ID("uio_pdrv_genirq.7", &mstp_clks[MSTP223]), /* SPU2DSP1 */
CLKDEV_DEV_ID("sh-dma-engine.0", &mstp_clks[MSTP218]), /* DMAC1 */
CLKDEV_DEV_ID("sh-dma-engine.1", &mstp_clks[MSTP217]), /* DMAC2 */
CLKDEV_DEV_ID("sh-dma-engine.2", &mstp_clks[MSTP216]), /* DMAC3 */
CLKDEV_DEV_ID("sh-sci.5", &mstp_clks[MSTP207]), /* SCIFA5 */
CLKDEV_DEV_ID("sh-sci.6", &mstp_clks[MSTP206]), /* SCIFB */
CLKDEV_DEV_ID("sh-sci.0", &mstp_clks[MSTP204]), /* SCIFA0 */

View file

@ -24,6 +24,8 @@
#include <mach/irqs.h>
#include "board-harmony.h"
#define PMC_CTRL 0x0
#define PMC_CTRL_INTR_LOW (1 << 17)
@ -98,7 +100,7 @@ static struct tps6586x_platform_data tps_platform = {
.irq_base = TEGRA_NR_IRQS,
.num_subdevs = ARRAY_SIZE(tps_devs),
.subdevs = tps_devs,
.gpio_base = TEGRA_NR_GPIOS,
.gpio_base = HARMONY_GPIO_TPS6586X(0),
};
static struct i2c_board_info __initdata harmony_regulators[] = {

View file

@ -17,7 +17,8 @@
#ifndef _MACH_TEGRA_BOARD_HARMONY_H
#define _MACH_TEGRA_BOARD_HARMONY_H
#define HARMONY_GPIO_WM8903(_x_) (TEGRA_NR_GPIOS + (_x_))
#define HARMONY_GPIO_TPS6586X(_x_) (TEGRA_NR_GPIOS + (_x_))
#define HARMONY_GPIO_WM8903(_x_) (HARMONY_GPIO_TPS6586X(4) + (_x_))
#define TEGRA_GPIO_SD2_CD TEGRA_GPIO_PI5
#define TEGRA_GPIO_SD2_WP TEGRA_GPIO_PH1

View file

@ -84,6 +84,7 @@
#include <linux/io.h>
#include <linux/clk.h>
#include <linux/clkdev.h>
#include <linux/pm_runtime.h>
#include <plat/omap_device.h>
#include <plat/omap_hwmod.h>
@ -539,20 +540,34 @@ int omap_early_device_register(struct omap_device *od)
static int _od_runtime_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
int ret;
return omap_device_idle(pdev);
ret = pm_generic_runtime_suspend(dev);
if (!ret)
omap_device_idle(pdev);
return ret;
}
static int _od_runtime_idle(struct device *dev)
{
return pm_generic_runtime_idle(dev);
}
static int _od_runtime_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
return omap_device_enable(pdev);
omap_device_enable(pdev);
return pm_generic_runtime_resume(dev);
}
static struct dev_power_domain omap_device_power_domain = {
.ops = {
.runtime_suspend = _od_runtime_suspend,
.runtime_idle = _od_runtime_idle,
.runtime_resume = _od_runtime_resume,
USE_PLATFORM_PM_SLEEP_OPS
}

View file

@ -184,7 +184,7 @@ struct bfin_uart_regs {
#undef __BFP
#ifndef port_membase
# define port_membase(p) (((struct bfin_serial_port *)(p))->port.membase)
# define port_membase(p) 0
#endif
#define UART_GET_CHAR(p) bfin_read16(port_membase(p) + OFFSET_RBR)
@ -235,10 +235,10 @@ struct bfin_uart_regs {
#define UART_SET_DLAB(p) do { UART_PUT_LCR(p, UART_GET_LCR(p) | DLAB); SSYNC(); } while (0)
#ifndef put_lsr_cache
# define put_lsr_cache(p, v) (((struct bfin_serial_port *)(p))->lsr = (v))
# define put_lsr_cache(p, v)
#endif
#ifndef get_lsr_cache
# define get_lsr_cache(p) (((struct bfin_serial_port *)(p))->lsr)
# define get_lsr_cache(p) 0
#endif
/* The hardware clears the LSR bits upon read, so we need to cache

View file

@ -193,4 +193,22 @@ uint16_t get_enabled_gptimers(void);
uint32_t get_gptimer_status(unsigned int group);
void set_gptimer_status(unsigned int group, uint32_t value);
/*
* All Blackfin system MMRs are padded to 32bits even if the register
* itself is only 16bits. So use a helper macro to streamline this.
*/
#define __BFP(m) u16 m; u16 __pad_##m
/*
* bfin timer registers layout
*/
struct bfin_gptimer_regs {
__BFP(config);
u32 counter;
u32 period;
u32 width;
};
#undef __BFP
#endif

View file

@ -398,8 +398,9 @@
#define __NR_clock_adjtime 377
#define __NR_syncfs 378
#define __NR_setns 379
#define __NR_sendmmsg 380
#define __NR_syscall 380
#define __NR_syscall 381
#define NR_syscalls __NR_syscall
/* Old optional stuff no one actually uses */

View file

@ -13,6 +13,7 @@
#include <asm/blackfin.h>
#include <asm/gpio.h>
#include <asm/gptimers.h>
#include <asm/bfin_can.h>
#include <asm/bfin_dma.h>
#include <asm/bfin_ppi.h>
@ -230,8 +231,8 @@ bfin_debug_mmrs_dma(struct dentry *parent, unsigned long base, int num, char mdm
#define DMA(num) _DMA(num, DMA##num##_NEXT_DESC_PTR, 0, "")
#define _MDMA(num, x) \
do { \
_DMA(num, x##DMA_D##num##_CONFIG, 'D', #x); \
_DMA(num, x##DMA_S##num##_CONFIG, 'S', #x); \
_DMA(num, x##DMA_D##num##_NEXT_DESC_PTR, 'D', #x); \
_DMA(num, x##DMA_S##num##_NEXT_DESC_PTR, 'S', #x); \
} while (0)
#define MDMA(num) _MDMA(num, M)
#define IMDMA(num) _MDMA(num, IM)
@ -264,20 +265,15 @@ bfin_debug_mmrs_eppi(struct dentry *parent, unsigned long base, int num)
/*
* General Purpose Timers
*/
#define GPTIMER_OFF(mmr) (TIMER0_##mmr - TIMER0_CONFIG)
#define __GPTIMER(name) \
do { \
strcpy(_buf, #name); \
debugfs_create_x16(buf, S_IRUSR|S_IWUSR, parent, (u16 *)(base + GPTIMER_OFF(name))); \
} while (0)
#define __GPTIMER(uname, lname) __REGS(gptimer, #uname, lname)
static void __init __maybe_unused
bfin_debug_mmrs_gptimer(struct dentry *parent, unsigned long base, int num)
{
char buf[32], *_buf = REGS_STR_PFX(buf, TIMER, num);
__GPTIMER(CONFIG);
__GPTIMER(COUNTER);
__GPTIMER(PERIOD);
__GPTIMER(WIDTH);
__GPTIMER(CONFIG, config);
__GPTIMER(COUNTER, counter);
__GPTIMER(PERIOD, period);
__GPTIMER(WIDTH, width);
}
#define GPTIMER(num) bfin_debug_mmrs_gptimer(parent, TIMER##num##_CONFIG, num)
@ -355,7 +351,7 @@ bfin_debug_mmrs_ppi(struct dentry *parent, unsigned long base, int num)
__PPI(DELAY, delay);
__PPI(FRAME, frame);
}
#define PPI(num) bfin_debug_mmrs_ppi(parent, PPI##num##_STATUS, num)
#define PPI(num) bfin_debug_mmrs_ppi(parent, PPI##num##_CONTROL, num)
/*
* SPI
@ -1288,15 +1284,15 @@ static int __init bfin_debug_mmrs_init(void)
D16(VR_CTL);
D32(CHIPID); /* it's part of this hardware block */
#if defined(PPI_STATUS) || defined(PPI0_STATUS) || defined(PPI1_STATUS)
#if defined(PPI_CONTROL) || defined(PPI0_CONTROL) || defined(PPI1_CONTROL)
parent = debugfs_create_dir("ppi", top);
# ifdef PPI_STATUS
bfin_debug_mmrs_ppi(parent, PPI_STATUS, -1);
# ifdef PPI_CONTROL
bfin_debug_mmrs_ppi(parent, PPI_CONTROL, -1);
# endif
# ifdef PPI0_STATUS
# ifdef PPI0_CONTROL
PPI(0);
# endif
# ifdef PPI1_STATUS
# ifdef PPI1_CONTROL
PPI(1);
# endif
#endif
@ -1341,6 +1337,10 @@ static int __init bfin_debug_mmrs_init(void)
D16(RSI_PID1);
D16(RSI_PID2);
D16(RSI_PID3);
D16(RSI_PID4);
D16(RSI_PID5);
D16(RSI_PID6);
D16(RSI_PID7);
D16(RSI_PWR_CONTROL);
D16(RSI_RD_WAIT_EN);
D32(RSI_RESPONSE0);

View file

@ -25,7 +25,7 @@
ENTRY(_strncpy)
CC = R2 == 0;
if CC JUMP 4f;
if CC JUMP 6f;
P2 = R2 ; /* size */
P0 = R0 ; /* dst*/

View file

@ -1,79 +0,0 @@
/*
* Copyright 2008-2009 Analog Devices Inc.
*
* Licensed under the GPL-2 or later
*/
#include <asm/dma.h>
#include <asm/portmux.h>
#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS)
# define CONFIG_SERIAL_BFIN_CTSRTS
# ifndef CONFIG_UART0_CTS_PIN
# define CONFIG_UART0_CTS_PIN -1
# endif
# ifndef CONFIG_UART0_RTS_PIN
# define CONFIG_UART0_RTS_PIN -1
# endif
# ifndef CONFIG_UART1_CTS_PIN
# define CONFIG_UART1_CTS_PIN -1
# endif
# ifndef CONFIG_UART1_RTS_PIN
# define CONFIG_UART1_RTS_PIN -1
# endif
#endif
struct bfin_serial_res {
unsigned long uart_base_addr;
int uart_irq;
int uart_status_irq;
#ifdef CONFIG_SERIAL_BFIN_DMA
unsigned int uart_tx_dma_channel;
unsigned int uart_rx_dma_channel;
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
int uart_cts_pin;
int uart_rts_pin;
#endif
};
struct bfin_serial_res bfin_serial_resource[] = {
#ifdef CONFIG_SERIAL_BFIN_UART0
{
0xFFC00400,
IRQ_UART0_RX,
IRQ_UART0_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART0_TX,
CH_UART0_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
},
#endif
#ifdef CONFIG_SERIAL_BFIN_UART1
{
0xFFC02000,
IRQ_UART1_RX,
IRQ_UART1_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART1_TX,
CH_UART1_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART1_CTS_PIN,
CONFIG_UART1_RTS_PIN,
#endif
},
#endif
};
#define DRIVER_NAME "bfin-uart"
#include <asm/bfin_serial.h>

View file

@ -36,13 +36,13 @@
#define RSI_EMASK 0xFFC038C4 /* RSI Exception Mask Register */
#define RSI_CONFIG 0xFFC038C8 /* RSI Configuration Register */
#define RSI_RD_WAIT_EN 0xFFC038CC /* RSI Read Wait Enable Register */
#define RSI_PID0 0xFFC03FE0 /* RSI Peripheral ID Register 0 */
#define RSI_PID1 0xFFC03FE4 /* RSI Peripheral ID Register 1 */
#define RSI_PID2 0xFFC03FE8 /* RSI Peripheral ID Register 2 */
#define RSI_PID3 0xFFC03FEC /* RSI Peripheral ID Register 3 */
#define RSI_PID4 0xFFC03FF0 /* RSI Peripheral ID Register 4 */
#define RSI_PID5 0xFFC03FF4 /* RSI Peripheral ID Register 5 */
#define RSI_PID6 0xFFC03FF8 /* RSI Peripheral ID Register 6 */
#define RSI_PID7 0xFFC03FFC /* RSI Peripheral ID Register 7 */
#define RSI_PID0 0xFFC038D0 /* RSI Peripheral ID Register 0 */
#define RSI_PID1 0xFFC038D4 /* RSI Peripheral ID Register 1 */
#define RSI_PID2 0xFFC038D8 /* RSI Peripheral ID Register 2 */
#define RSI_PID3 0xFFC038DC /* RSI Peripheral ID Register 3 */
#define RSI_PID4 0xFFC038E0 /* RSI Peripheral ID Register 4 */
#define RSI_PID5 0xFFC038E4 /* RSI Peripheral ID Register 5 */
#define RSI_PID6 0xFFC038E8 /* RSI Peripheral ID Register 6 */
#define RSI_PID7 0xFFC038EC /* RSI Peripheral ID Register 7 */
#endif /* _DEF_BF514_H */

View file

@ -1,79 +0,0 @@
/*
* Copyright 2007-2009 Analog Devices Inc.
*
* Licensed under the GPL-2 or later
*/
#include <asm/dma.h>
#include <asm/portmux.h>
#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS)
# define CONFIG_SERIAL_BFIN_CTSRTS
# ifndef CONFIG_UART0_CTS_PIN
# define CONFIG_UART0_CTS_PIN -1
# endif
# ifndef CONFIG_UART0_RTS_PIN
# define CONFIG_UART0_RTS_PIN -1
# endif
# ifndef CONFIG_UART1_CTS_PIN
# define CONFIG_UART1_CTS_PIN -1
# endif
# ifndef CONFIG_UART1_RTS_PIN
# define CONFIG_UART1_RTS_PIN -1
# endif
#endif
struct bfin_serial_res {
unsigned long uart_base_addr;
int uart_irq;
int uart_status_irq;
#ifdef CONFIG_SERIAL_BFIN_DMA
unsigned int uart_tx_dma_channel;
unsigned int uart_rx_dma_channel;
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
int uart_cts_pin;
int uart_rts_pin;
#endif
};
struct bfin_serial_res bfin_serial_resource[] = {
#ifdef CONFIG_SERIAL_BFIN_UART0
{
0xFFC00400,
IRQ_UART0_RX,
IRQ_UART0_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART0_TX,
CH_UART0_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
},
#endif
#ifdef CONFIG_SERIAL_BFIN_UART1
{
0xFFC02000,
IRQ_UART1_RX,
IRQ_UART1_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART1_TX,
CH_UART1_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART1_CTS_PIN,
CONFIG_UART1_RTS_PIN,
#endif
},
#endif
};
#define DRIVER_NAME "bfin-uart"
#include <asm/bfin_serial.h>

View file

@ -185,8 +185,8 @@
#define USB_EP_NI7_TXTYPE 0xffc03bd4 /* Sets the transaction protocol and peripheral endpoint number for the Host Tx endpoint7 */
#define USB_EP_NI7_TXINTERVAL 0xffc03bd8 /* Sets the NAK response timeout on Endpoint7 */
#define USB_EP_NI7_RXTYPE 0xffc03bdc /* Sets the transaction protocol and peripheral endpoint number for the Host Rx endpoint7 */
#define USB_EP_NI7_RXINTERVAL 0xffc03bf0 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint7 */
#define USB_EP_NI7_TXCOUNT 0xffc03bf8 /* Number of bytes to be written to the endpoint7 Tx FIFO */
#define USB_EP_NI7_RXINTERVAL 0xffc03be0 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint7 */
#define USB_EP_NI7_TXCOUNT 0xffc03be8 /* Number of bytes to be written to the endpoint7 Tx FIFO */
#define USB_DMA_INTERRUPT 0xffc03c00 /* Indicates pending interrupts for the DMA channels */

View file

@ -1,52 +0,0 @@
/*
* Copyright 2006-2009 Analog Devices Inc.
*
* Licensed under the GPL-2 or later
*/
#include <asm/dma.h>
#include <asm/portmux.h>
#ifdef CONFIG_BFIN_UART0_CTSRTS
# define CONFIG_SERIAL_BFIN_CTSRTS
# ifndef CONFIG_UART0_CTS_PIN
# define CONFIG_UART0_CTS_PIN -1
# endif
# ifndef CONFIG_UART0_RTS_PIN
# define CONFIG_UART0_RTS_PIN -1
# endif
#endif
struct bfin_serial_res {
unsigned long uart_base_addr;
int uart_irq;
int uart_status_irq;
#ifdef CONFIG_SERIAL_BFIN_DMA
unsigned int uart_tx_dma_channel;
unsigned int uart_rx_dma_channel;
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
int uart_cts_pin;
int uart_rts_pin;
#endif
};
struct bfin_serial_res bfin_serial_resource[] = {
{
0xFFC00400,
IRQ_UART0_RX,
IRQ_UART0_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART0_TX,
CH_UART0_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
}
};
#define DRIVER_NAME "bfin-uart"
#include <asm/bfin_serial.h>

View file

@ -1,79 +0,0 @@
/*
* Copyright 2006-2009 Analog Devices Inc.
*
* Licensed under the GPL-2 or later
*/
#include <asm/dma.h>
#include <asm/portmux.h>
#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS)
# define CONFIG_SERIAL_BFIN_CTSRTS
# ifndef CONFIG_UART0_CTS_PIN
# define CONFIG_UART0_CTS_PIN -1
# endif
# ifndef CONFIG_UART0_RTS_PIN
# define CONFIG_UART0_RTS_PIN -1
# endif
# ifndef CONFIG_UART1_CTS_PIN
# define CONFIG_UART1_CTS_PIN -1
# endif
# ifndef CONFIG_UART1_RTS_PIN
# define CONFIG_UART1_RTS_PIN -1
# endif
#endif
struct bfin_serial_res {
unsigned long uart_base_addr;
int uart_irq;
int uart_status_irq;
#ifdef CONFIG_SERIAL_BFIN_DMA
unsigned int uart_tx_dma_channel;
unsigned int uart_rx_dma_channel;
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
int uart_cts_pin;
int uart_rts_pin;
#endif
};
struct bfin_serial_res bfin_serial_resource[] = {
#ifdef CONFIG_SERIAL_BFIN_UART0
{
0xFFC00400,
IRQ_UART0_RX,
IRQ_UART0_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART0_TX,
CH_UART0_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
},
#endif
#ifdef CONFIG_SERIAL_BFIN_UART1
{
0xFFC02000,
IRQ_UART1_RX,
IRQ_UART1_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART1_TX,
CH_UART1_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART1_CTS_PIN,
CONFIG_UART1_RTS_PIN,
#endif
},
#endif
};
#define DRIVER_NAME "bfin-uart"
#include <asm/bfin_serial.h>

View file

@ -1,93 +0,0 @@
/*
* Copyright 2008-2009 Analog Devices Inc.
*
* Licensed under the GPL-2 or later.
*/
#include <asm/dma.h>
#include <asm/portmux.h>
#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS)
# define CONFIG_SERIAL_BFIN_CTSRTS
# ifndef CONFIG_UART0_CTS_PIN
# define CONFIG_UART0_CTS_PIN -1
# endif
# ifndef CONFIG_UART0_RTS_PIN
# define CONFIG_UART0_RTS_PIN -1
# endif
# ifndef CONFIG_UART1_CTS_PIN
# define CONFIG_UART1_CTS_PIN -1
# endif
# ifndef CONFIG_UART1_RTS_PIN
# define CONFIG_UART1_RTS_PIN -1
# endif
#endif
struct bfin_serial_res {
unsigned long uart_base_addr;
int uart_irq;
int uart_status_irq;
#ifdef CONFIG_SERIAL_BFIN_DMA
unsigned int uart_tx_dma_channel;
unsigned int uart_rx_dma_channel;
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
int uart_cts_pin;
int uart_rts_pin;
#endif
};
struct bfin_serial_res bfin_serial_resource[] = {
#ifdef CONFIG_SERIAL_BFIN_UART0
{
0xFFC00400,
IRQ_UART0_RX,
IRQ_UART0_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART0_TX,
CH_UART0_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
},
#endif
#ifdef CONFIG_SERIAL_BFIN_UART1
{
0xFFC02000,
IRQ_UART1_RX,
IRQ_UART1_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART1_TX,
CH_UART1_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART1_CTS_PIN,
CONFIG_UART1_RTS_PIN,
#endif
},
#endif
#ifdef CONFIG_SERIAL_BFIN_UART2
{
0xFFC02100,
IRQ_UART2_RX,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART2_TX,
CH_UART2_RX,
#endif
#ifdef CONFIG_BFIN_UART2_CTSRTS
CONFIG_UART2_CTS_PIN,
CONFIG_UART2_RTS_PIN,
#endif
},
#endif
};
#define DRIVER_NAME "bfin-uart"
#include <asm/bfin_serial.h>

View file

@ -1,94 +0,0 @@
/*
* Copyright 2007-2009 Analog Devices Inc.
*
* Licensed under the GPL-2 or later.
*/
#include <asm/dma.h>
#include <asm/portmux.h>
#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS) || \
defined(CONFIG_BFIN_UART2_CTSRTS) || defined(CONFIG_BFIN_UART3_CTSRTS)
# define CONFIG_SERIAL_BFIN_HARD_CTSRTS
#endif
struct bfin_serial_res {
unsigned long uart_base_addr;
int uart_irq;
int uart_status_irq;
#ifdef CONFIG_SERIAL_BFIN_DMA
unsigned int uart_tx_dma_channel;
unsigned int uart_rx_dma_channel;
#endif
#ifdef CONFIG_SERIAL_BFIN_HARD_CTSRTS
int uart_cts_pin;
int uart_rts_pin;
#endif
};
struct bfin_serial_res bfin_serial_resource[] = {
#ifdef CONFIG_SERIAL_BFIN_UART0
{
0xFFC00400,
IRQ_UART0_RX,
IRQ_UART0_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART0_TX,
CH_UART0_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_HARD_CTSRTS
0,
0,
#endif
},
#endif
#ifdef CONFIG_SERIAL_BFIN_UART1
{
0xFFC02000,
IRQ_UART1_RX,
IRQ_UART1_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART1_TX,
CH_UART1_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_HARD_CTSRTS
GPIO_PE10,
GPIO_PE9,
#endif
},
#endif
#ifdef CONFIG_SERIAL_BFIN_UART2
{
0xFFC02100,
IRQ_UART2_RX,
IRQ_UART2_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART2_TX,
CH_UART2_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_HARD_CTSRTS
0,
0,
#endif
},
#endif
#ifdef CONFIG_SERIAL_BFIN_UART3
{
0xFFC03100,
IRQ_UART3_RX,
IRQ_UART3_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART3_TX,
CH_UART3_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_HARD_CTSRTS
GPIO_PB3,
GPIO_PB2,
#endif
},
#endif
};
#define DRIVER_NAME "bfin-uart"
#include <asm/bfin_serial.h>

View file

@ -271,10 +271,10 @@
#define USB_EP_NI0_TXINTERVAL 0xffc03e18 /* Sets the NAK response timeout on Endpoint 0 */
#define USB_EP_NI0_RXTYPE 0xffc03e1c /* Sets the transaction protocol and peripheral endpoint number for the Host Rx endpoint0 */
#define USB_EP_NI0_RXINTERVAL 0xffc03e20 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint0 */
#define USB_EP_NI0_TXCOUNT 0xffc03e28 /* Number of bytes to be written to the endpoint0 Tx FIFO */
/* USB Endpoint 1 Control Registers */
#define USB_EP_NI0_TXCOUNT 0xffc03e28 /* Number of bytes to be written to the endpoint0 Tx FIFO */
#define USB_EP_NI1_TXMAXP 0xffc03e40 /* Maximum packet size for Host Tx endpoint1 */
#define USB_EP_NI1_TXCSR 0xffc03e44 /* Control Status register for endpoint1 */
#define USB_EP_NI1_RXMAXP 0xffc03e48 /* Maximum packet size for Host Rx endpoint1 */
@ -284,10 +284,10 @@
#define USB_EP_NI1_TXINTERVAL 0xffc03e58 /* Sets the NAK response timeout on Endpoint1 */
#define USB_EP_NI1_RXTYPE 0xffc03e5c /* Sets the transaction protocol and peripheral endpoint number for the Host Rx endpoint1 */
#define USB_EP_NI1_RXINTERVAL 0xffc03e60 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint1 */
#define USB_EP_NI1_TXCOUNT 0xffc03e68 /* Number of bytes to be written to the endpoint1 Tx FIFO */
/* USB Endpoint 2 Control Registers */
#define USB_EP_NI1_TXCOUNT 0xffc03e68 /* Number of bytes to be written to the endpoint1 Tx FIFO */
#define USB_EP_NI2_TXMAXP 0xffc03e80 /* Maximum packet size for Host Tx endpoint2 */
#define USB_EP_NI2_TXCSR 0xffc03e84 /* Control Status register for endpoint2 */
#define USB_EP_NI2_RXMAXP 0xffc03e88 /* Maximum packet size for Host Rx endpoint2 */
@ -297,10 +297,10 @@
#define USB_EP_NI2_TXINTERVAL 0xffc03e98 /* Sets the NAK response timeout on Endpoint2 */
#define USB_EP_NI2_RXTYPE 0xffc03e9c /* Sets the transaction protocol and peripheral endpoint number for the Host Rx endpoint2 */
#define USB_EP_NI2_RXINTERVAL 0xffc03ea0 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint2 */
#define USB_EP_NI2_TXCOUNT 0xffc03ea8 /* Number of bytes to be written to the endpoint2 Tx FIFO */
/* USB Endpoint 3 Control Registers */
#define USB_EP_NI2_TXCOUNT 0xffc03ea8 /* Number of bytes to be written to the endpoint2 Tx FIFO */
#define USB_EP_NI3_TXMAXP 0xffc03ec0 /* Maximum packet size for Host Tx endpoint3 */
#define USB_EP_NI3_TXCSR 0xffc03ec4 /* Control Status register for endpoint3 */
#define USB_EP_NI3_RXMAXP 0xffc03ec8 /* Maximum packet size for Host Rx endpoint3 */
@ -310,10 +310,10 @@
#define USB_EP_NI3_TXINTERVAL 0xffc03ed8 /* Sets the NAK response timeout on Endpoint3 */
#define USB_EP_NI3_RXTYPE 0xffc03edc /* Sets the transaction protocol and peripheral endpoint number for the Host Rx endpoint3 */
#define USB_EP_NI3_RXINTERVAL 0xffc03ee0 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint3 */
#define USB_EP_NI3_TXCOUNT 0xffc03ee8 /* Number of bytes to be written to the endpoint3 Tx FIFO */
/* USB Endpoint 4 Control Registers */
#define USB_EP_NI3_TXCOUNT 0xffc03ee8 /* Number of bytes to be written to the endpoint3 Tx FIFO */
#define USB_EP_NI4_TXMAXP 0xffc03f00 /* Maximum packet size for Host Tx endpoint4 */
#define USB_EP_NI4_TXCSR 0xffc03f04 /* Control Status register for endpoint4 */
#define USB_EP_NI4_RXMAXP 0xffc03f08 /* Maximum packet size for Host Rx endpoint4 */
@ -323,10 +323,10 @@
#define USB_EP_NI4_TXINTERVAL 0xffc03f18 /* Sets the NAK response timeout on Endpoint4 */
#define USB_EP_NI4_RXTYPE 0xffc03f1c /* Sets the transaction protocol and peripheral endpoint number for the Host Rx endpoint4 */
#define USB_EP_NI4_RXINTERVAL 0xffc03f20 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint4 */
#define USB_EP_NI4_TXCOUNT 0xffc03f28 /* Number of bytes to be written to the endpoint4 Tx FIFO */
/* USB Endpoint 5 Control Registers */
#define USB_EP_NI4_TXCOUNT 0xffc03f28 /* Number of bytes to be written to the endpoint4 Tx FIFO */
#define USB_EP_NI5_TXMAXP 0xffc03f40 /* Maximum packet size for Host Tx endpoint5 */
#define USB_EP_NI5_TXCSR 0xffc03f44 /* Control Status register for endpoint5 */
#define USB_EP_NI5_RXMAXP 0xffc03f48 /* Maximum packet size for Host Rx endpoint5 */
@ -336,10 +336,10 @@
#define USB_EP_NI5_TXINTERVAL 0xffc03f58 /* Sets the NAK response timeout on Endpoint5 */
#define USB_EP_NI5_RXTYPE 0xffc03f5c /* Sets the transaction protocol and peripheral endpoint number for the Host Rx endpoint5 */
#define USB_EP_NI5_RXINTERVAL 0xffc03f60 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint5 */
#define USB_EP_NI5_TXCOUNT 0xffc03f68 /* Number of bytes to be written to the endpoint5 Tx FIFO */
/* USB Endpoint 6 Control Registers */
#define USB_EP_NI5_TXCOUNT 0xffc03f68 /* Number of bytes to be written to the endpoint5 Tx FIFO */
#define USB_EP_NI6_TXMAXP 0xffc03f80 /* Maximum packet size for Host Tx endpoint6 */
#define USB_EP_NI6_TXCSR 0xffc03f84 /* Control Status register for endpoint6 */
#define USB_EP_NI6_RXMAXP 0xffc03f88 /* Maximum packet size for Host Rx endpoint6 */
@ -349,10 +349,10 @@
#define USB_EP_NI6_TXINTERVAL 0xffc03f98 /* Sets the NAK response timeout on Endpoint6 */
#define USB_EP_NI6_RXTYPE 0xffc03f9c /* Sets the transaction protocol and peripheral endpoint number for the Host Rx endpoint6 */
#define USB_EP_NI6_RXINTERVAL 0xffc03fa0 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint6 */
#define USB_EP_NI6_TXCOUNT 0xffc03fa8 /* Number of bytes to be written to the endpoint6 Tx FIFO */
/* USB Endpoint 7 Control Registers */
#define USB_EP_NI6_TXCOUNT 0xffc03fa8 /* Number of bytes to be written to the endpoint6 Tx FIFO */
#define USB_EP_NI7_TXMAXP 0xffc03fc0 /* Maximum packet size for Host Tx endpoint7 */
#define USB_EP_NI7_TXCSR 0xffc03fc4 /* Control Status register for endpoint7 */
#define USB_EP_NI7_RXMAXP 0xffc03fc8 /* Maximum packet size for Host Rx endpoint7 */
@ -361,8 +361,9 @@
#define USB_EP_NI7_TXTYPE 0xffc03fd4 /* Sets the transaction protocol and peripheral endpoint number for the Host Tx endpoint7 */
#define USB_EP_NI7_TXINTERVAL 0xffc03fd8 /* Sets the NAK response timeout on Endpoint7 */
#define USB_EP_NI7_RXTYPE 0xffc03fdc /* Sets the transaction protocol and peripheral endpoint number for the Host Rx endpoint7 */
#define USB_EP_NI7_RXINTERVAL 0xffc03ff0 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint7 */
#define USB_EP_NI7_TXCOUNT 0xffc03ff8 /* Number of bytes to be written to the endpoint7 Tx FIFO */
#define USB_EP_NI7_RXINTERVAL 0xffc03fe0 /* Sets the polling interval for Interrupt/Isochronous transfers or the NAK response timeout on Bulk transfers for Host Rx endpoint7 */
#define USB_EP_NI7_TXCOUNT 0xffc03fe8 /* Number of bytes to be written to the endpoint7 Tx FIFO */
#define USB_DMA_INTERRUPT 0xffc04000 /* Indicates pending interrupts for the DMA channels */
/* USB Channel 0 Config Registers */

View file

@ -1,52 +0,0 @@
/*
* Copyright 2006-2009 Analog Devices Inc.
*
* Licensed under the GPL-2 or later.
*/
#include <asm/dma.h>
#include <asm/portmux.h>
#ifdef CONFIG_BFIN_UART0_CTSRTS
# define CONFIG_SERIAL_BFIN_CTSRTS
# ifndef CONFIG_UART0_CTS_PIN
# define CONFIG_UART0_CTS_PIN -1
# endif
# ifndef CONFIG_UART0_RTS_PIN
# define CONFIG_UART0_RTS_PIN -1
# endif
#endif
struct bfin_serial_res {
unsigned long uart_base_addr;
int uart_irq;
int uart_status_irq;
#ifdef CONFIG_SERIAL_BFIN_DMA
unsigned int uart_tx_dma_channel;
unsigned int uart_rx_dma_channel;
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
int uart_cts_pin;
int uart_rts_pin;
#endif
};
struct bfin_serial_res bfin_serial_resource[] = {
{
0xFFC00400,
IRQ_UART_RX,
IRQ_UART_ERROR,
#ifdef CONFIG_SERIAL_BFIN_DMA
CH_UART_TX,
CH_UART_RX,
#endif
#ifdef CONFIG_SERIAL_BFIN_CTSRTS
CONFIG_UART0_CTS_PIN,
CONFIG_UART0_RTS_PIN,
#endif
}
};
#define DRIVER_NAME "bfin-uart"
#include <asm/bfin_serial.h>

View file

@ -1754,6 +1754,7 @@ ENTRY(_sys_call_table)
.long _sys_clock_adjtime
.long _sys_syncfs
.long _sys_setns
.long _sys_sendmmsg /* 380 */
.rept NR_syscalls-(.-_sys_call_table)/4
.long _sys_ni_syscall

View file

@ -16,7 +16,7 @@ static int validate_memory_access_address(unsigned long addr, int size)
return bfin_mem_access_type(addr, size);
}
long probe_kernel_read(void *dst, void *src, size_t size)
long probe_kernel_read(void *dst, const void *src, size_t size)
{
unsigned long lsrc = (unsigned long)src;
int mem_type;
@ -55,7 +55,7 @@ long probe_kernel_read(void *dst, void *src, size_t size)
return -EFAULT;
}
long probe_kernel_write(void *dst, void *src, size_t size)
long probe_kernel_write(void *dst, const void *src, size_t size)
{
unsigned long ldst = (unsigned long)dst;
int mem_type;

View file

@ -320,11 +320,12 @@
#define __NR_clock_adjtime 1328
#define __NR_syncfs 1329
#define __NR_setns 1330
#define __NR_sendmmsg 1331
#ifdef __KERNEL__
#define NR_syscalls 307 /* length of syscall table */
#define NR_syscalls 308 /* length of syscall table */
/*
* The following defines stop scripts/checksyscalls.sh from complaining about

View file

@ -1776,6 +1776,7 @@ sys_call_table:
data8 sys_clock_adjtime
data8 sys_syncfs
data8 sys_setns // 1330
data8 sys_sendmmsg
.org sys_call_table + 8*NR_syscalls // guard against failures to increase NR_syscalls
#endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */

View file

@ -715,7 +715,8 @@ static struct syscore_ops pmacpic_syscore_ops = {
static int __init init_pmacpic_syscore(void)
{
register_syscore_ops(&pmacpic_syscore_ops);
if (pmac_irq_hw[0])
register_syscore_ops(&pmacpic_syscore_ops);
return 0;
}

View file

@ -577,16 +577,16 @@ static inline void pgste_set_unlock(pte_t *ptep, pgste_t pgste)
static inline pgste_t pgste_update_all(pte_t *ptep, pgste_t pgste)
{
#ifdef CONFIG_PGSTE
unsigned long pfn, bits;
unsigned long address, bits;
unsigned char skey;
pfn = pte_val(*ptep) >> PAGE_SHIFT;
skey = page_get_storage_key(pfn);
address = pte_val(*ptep) & PAGE_MASK;
skey = page_get_storage_key(address);
bits = skey & (_PAGE_CHANGED | _PAGE_REFERENCED);
/* Clear page changed & referenced bit in the storage key */
if (bits) {
skey ^= bits;
page_set_storage_key(pfn, skey, 1);
page_set_storage_key(address, skey, 1);
}
/* Transfer page changed & referenced bit to guest bits in pgste */
pgste_val(pgste) |= bits << 48; /* RCP_GR_BIT & RCP_GC_BIT */
@@ -628,16 +628,16 @@ static inline pgste_t pgste_update_young(pte_t *ptep, pgste_t pgste)
static inline void pgste_set_pte(pte_t *ptep, pgste_t pgste)
{
#ifdef CONFIG_PGSTE
unsigned long pfn;
unsigned long address;
unsigned long okey, nkey;
pfn = pte_val(*ptep) >> PAGE_SHIFT;
okey = nkey = page_get_storage_key(pfn);
address = pte_val(*ptep) & PAGE_MASK;
okey = nkey = page_get_storage_key(address);
nkey &= ~(_PAGE_ACC_BITS | _PAGE_FP_BIT);
/* Set page access key and fetch protection bit from pgste */
nkey |= (pgste_val(pgste) & (RCP_ACC_BITS | RCP_FP_BIT)) >> 56;
if (okey != nkey)
page_set_storage_key(pfn, nkey, 1);
page_set_storage_key(address, nkey, 1);
#endif
}

View file

@@ -19,7 +19,7 @@
* using the stura instruction.
* Returns the number of bytes copied or -EFAULT.
*/
static long probe_kernel_write_odd(void *dst, void *src, size_t size)
static long probe_kernel_write_odd(void *dst, const void *src, size_t size)
{
unsigned long count, aligned;
int offset, mask;
@@ -45,7 +45,7 @@ static long probe_kernel_write_odd(void *dst, void *src, size_t size)
return rc ? rc : count;
}
long probe_kernel_write(void *dst, void *src, size_t size)
long probe_kernel_write(void *dst, const void *src, size_t size)
{
long copied = 0;

View file

@@ -71,12 +71,15 @@ static void rcu_table_freelist_callback(struct rcu_head *head)
void rcu_table_freelist_finish(void)
{
struct rcu_table_freelist *batch = __get_cpu_var(rcu_table_freelist);
struct rcu_table_freelist **batchp = &get_cpu_var(rcu_table_freelist);
struct rcu_table_freelist *batch = *batchp;
if (!batch)
return;
goto out;
call_rcu(&batch->rcu, rcu_table_freelist_callback);
__get_cpu_var(rcu_table_freelist) = NULL;
*batchp = NULL;
out:
put_cpu_var(rcu_table_freelist);
}
static void smp_sync(void *arg)
@@ -141,20 +144,23 @@ void crst_table_free_rcu(struct mm_struct *mm, unsigned long *table)
{
struct rcu_table_freelist *batch;
preempt_disable();
if (atomic_read(&mm->mm_users) < 2 &&
cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id()))) {
crst_table_free(mm, table);
return;
goto out;
}
batch = rcu_table_freelist_get(mm);
if (!batch) {
smp_call_function(smp_sync, NULL, 1);
crst_table_free(mm, table);
return;
goto out;
}
batch->table[--batch->crst_index] = table;
if (batch->pgt_index >= batch->crst_index)
rcu_table_freelist_finish();
out:
preempt_enable();
}
#ifdef CONFIG_64BIT
@@ -323,16 +329,17 @@ void page_table_free_rcu(struct mm_struct *mm, unsigned long *table)
struct page *page;
unsigned long bits;
preempt_disable();
if (atomic_read(&mm->mm_users) < 2 &&
cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id()))) {
page_table_free(mm, table);
return;
goto out;
}
batch = rcu_table_freelist_get(mm);
if (!batch) {
smp_call_function(smp_sync, NULL, 1);
page_table_free(mm, table);
return;
goto out;
}
bits = (mm->context.has_pgste) ? 3UL : 1UL;
bits <<= (__pa(table) & (PAGE_SIZE - 1)) / 256 / sizeof(unsigned long);
@@ -345,6 +352,8 @@ void page_table_free_rcu(struct mm_struct *mm, unsigned long *table)
batch->table[batch->pgt_index++] = table;
if (batch->pgt_index >= batch->crst_index)
rcu_table_freelist_finish();
out:
preempt_enable();
}
/*

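The pgtable.c hunks above are one fix: reading the per-cpu freelist head
with a bare __get_cpu_var() raced with preemption, so every access is
now bracketed by get_cpu_var()/put_cpu_var() (or an explicit
preempt_disable()/preempt_enable() pair) with all exits funneled through
'out'. A minimal sketch of the pattern, with illustrative names:

#include <linux/percpu.h>

DEFINE_PER_CPU(struct rcu_table_freelist *, demo_freelist);

static void demo_finish(void)
{
	/* get_cpu_var() disables preemption, so this task cannot migrate
	 * between reading and clearing its CPU-local pointer. */
	struct rcu_table_freelist **batchp = &get_cpu_var(demo_freelist);

	if (*batchp) {
		/* ... hand *batchp to call_rcu() here ... */
		*batchp = NULL;
	}
	put_cpu_var(demo_freelist);
}
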
View file

@@ -161,7 +161,7 @@ config ARCH_HAS_CPU_IDLE_WAIT
config NO_IOPORT
def_bool !PCI
depends on !SH_CAYMAN && !SH_SH4202_MICRODEV
depends on !SH_CAYMAN && !SH_SH4202_MICRODEV && !SH_SHMIN
config IO_TRAPPED
bool

View file

@@ -359,37 +359,31 @@ static struct soc_camera_link camera_link = {
.priv = &camera_info,
};
static void dummy_release(struct device *dev)
{
}
static struct platform_device *camera_device;
static struct platform_device camera_device = {
.name = "soc_camera_platform",
.dev = {
.platform_data = &camera_info,
.release = dummy_release,
},
};
static void ap325rxa_camera_release(struct device *dev)
{
soc_camera_platform_release(&camera_device);
}
static int ap325rxa_camera_add(struct soc_camera_link *icl,
struct device *dev)
{
if (icl != &camera_link || camera_probe() <= 0)
return -ENODEV;
int ret = soc_camera_platform_add(icl, dev, &camera_device, &camera_link,
ap325rxa_camera_release, 0);
if (ret < 0)
return ret;
camera_info.dev = dev;
ret = camera_probe();
if (ret < 0)
soc_camera_platform_del(icl, camera_device, &camera_link);
return platform_device_register(&camera_device);
return ret;
}
static void ap325rxa_camera_del(struct soc_camera_link *icl)
{
if (icl != &camera_link)
return;
platform_device_unregister(&camera_device);
memset(&camera_device.dev.kobj, 0,
sizeof(camera_device.dev.kobj));
soc_camera_platform_del(icl, camera_device, &camera_link);
}
#endif /* CONFIG_I2C */

View file

@@ -885,6 +885,9 @@ static struct platform_device sh_mmcif_device = {
},
.num_resources = ARRAY_SIZE(sh_mmcif_resources),
.resource = sh_mmcif_resources,
.archdata = {
.hwblk_id = HWBLK_MMC,
},
};
#endif

View file

@@ -18,6 +18,7 @@
#include <asm/pgtable-2level.h>
#endif
#include <asm/page.h>
#include <asm/mmu.h>
#ifndef __ASSEMBLY__
#include <asm/addrspace.h>

View file

@@ -41,7 +41,9 @@
#define user_mode(regs) (((regs)->sr & 0x40000000)==0)
#define kernel_stack_pointer(_regs) ((unsigned long)(_regs)->regs[15])
#define GET_USP(regs) ((regs)->regs[15])
#define GET_FP(regs) ((regs)->regs[14])
#define GET_USP(regs) ((regs)->regs[15])
extern void show_regs(struct pt_regs *);
@@ -131,7 +133,7 @@ extern void ptrace_triggered(struct perf_event *bp, int nmi,
static inline unsigned long profile_pc(struct pt_regs *regs)
{
unsigned long pc = instruction_pointer(regs);
unsigned long pc = regs->pc;
if (virt_addr_uncached(pc))
return CAC_ADDR(pc);

View file

@@ -9,6 +9,7 @@
#include <linux/pagemap.h>
#ifdef CONFIG_MMU
#include <linux/swap.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>

View file

@@ -236,6 +236,7 @@ enum {
};
enum {
SHDMA_SLAVE_INVALID,
SHDMA_SLAVE_SCIF0_TX,
SHDMA_SLAVE_SCIF0_RX,
SHDMA_SLAVE_SCIF1_TX,

View file

@@ -285,6 +285,7 @@ enum {
};
enum {
SHDMA_SLAVE_INVALID,
SHDMA_SLAVE_SCIF0_TX,
SHDMA_SLAVE_SCIF0_RX,
SHDMA_SLAVE_SCIF1_TX,

View file

@@ -252,6 +252,7 @@ enum {
};
enum {
SHDMA_SLAVE_INVALID,
SHDMA_SLAVE_SDHI_TX,
SHDMA_SLAVE_SDHI_RX,
SHDMA_SLAVE_MMCIF_TX,
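
All three SH DMA slave-ID headers gain SHDMA_SLAVE_INVALID as the new
value 0, so a zero-initialized platform config can no longer alias the
first real slave. A hedged sketch of the kind of dmaengine filter check
this sentinel enables (illustrative code, not from this commit):

#include <linux/dmaengine.h>

/* Reject a channel request whose slave ID was never filled in. */
static bool demo_shdma_filter(struct dma_chan *chan, void *arg)
{
	int slave_id = *(int *)arg;

	if (slave_id == SHDMA_SLAVE_INVALID)
		return false;
	/* ... additionally match 'chan' against the right controller ... */
	return true;
}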

View file

@@ -21,6 +21,7 @@
#include <linux/fs.h>
#include <linux/ftrace.h>
#include <linux/hw_breakpoint.h>
#include <linux/prefetch.h>
#include <asm/uaccess.h>
#include <asm/mmu_context.h>
#include <asm/system.h>

View file

@@ -82,7 +82,7 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
void *addr;
addr = __in_29bit_mode() ?
(void *)P1SEGADDR((unsigned long)vaddr) : vaddr;
(void *)CAC_ADDR((unsigned long)vaddr) : vaddr;
switch (direction) {
case DMA_FROM_DEVICE: /* invalidate only */

View file

@@ -11,6 +11,7 @@ config TILE
select GENERIC_IRQ_PROBE
select GENERIC_PENDING_IRQ if SMP
select GENERIC_IRQ_SHOW
select SYS_HYPERVISOR
# FIXME: investigate whether we need/want these options.
# select HAVE_IOREMAP_PROT

View file

@@ -40,6 +40,10 @@
#define HARDWALL_DEACTIVATE \
_IO(HARDWALL_IOCTL_BASE, _HARDWALL_DEACTIVATE)
#define _HARDWALL_GET_ID 4
#define HARDWALL_GET_ID \
_IO(HARDWALL_IOCTL_BASE, _HARDWALL_GET_ID)
#ifndef __KERNEL__
/* This is the canonical name expected by userspace. */
@@ -47,9 +51,14 @@
#else
/* Hook for /proc/tile/hardwall. */
struct seq_file;
int proc_tile_hardwall_show(struct seq_file *sf, void *v);
/* /proc hooks for hardwall. */
struct proc_dir_entry;
#ifdef CONFIG_HARDWALL
void proc_tile_hardwall_init(struct proc_dir_entry *root);
int proc_pid_hardwall(struct task_struct *task, char *buffer);
#else
static inline void proc_tile_hardwall_init(struct proc_dir_entry *root) {}
#endif
#endif

View file

@@ -5,7 +5,7 @@
extra-y := vmlinux.lds head_$(BITS).o
obj-y := backtrace.o entry.o init_task.o irq.o messaging.o \
pci-dma.o proc.o process.o ptrace.o reboot.o \
setup.o signal.o single_step.o stack.o sys.o time.o traps.o \
setup.o signal.o single_step.o stack.o sys.o sysfs.o time.o traps.o \
intvec_$(BITS).o regs_$(BITS).o tile-desc_$(BITS).o
obj-$(CONFIG_HARDWALL) += hardwall.o

View file

@@ -40,16 +40,25 @@
struct hardwall_info {
struct list_head list; /* "rectangles" list */
struct list_head task_head; /* head of tasks in this hardwall */
struct cpumask cpumask; /* cpus in the rectangle */
int ulhc_x; /* upper left hand corner x coord */
int ulhc_y; /* upper left hand corner y coord */
int width; /* rectangle width */
int height; /* rectangle height */
int id; /* integer id for this hardwall */
int teardown_in_progress; /* are we tearing this one down? */
};
/* Currently allocated hardwall rectangles */
static LIST_HEAD(rectangles);
/* /proc/tile/hardwall */
static struct proc_dir_entry *hardwall_proc_dir;
/* Functions to manage files in /proc/tile/hardwall. */
static void hardwall_add_proc(struct hardwall_info *rect);
static void hardwall_remove_proc(struct hardwall_info *rect);
/*
* Guard changes to the hardwall data structures.
* This could be finer grained (e.g. one lock for the list of hardwall
@@ -105,6 +114,8 @@ static int setup_rectangle(struct hardwall_info *r, struct cpumask *mask)
r->ulhc_y = cpu_y(ulhc);
r->width = cpu_x(lrhc) - r->ulhc_x + 1;
r->height = cpu_y(lrhc) - r->ulhc_y + 1;
cpumask_copy(&r->cpumask, mask);
r->id = ulhc; /* The ulhc cpu id can be the hardwall id. */
/* Width and height must be positive */
if (r->width <= 0 || r->height <= 0)
@@ -388,6 +399,9 @@ static struct hardwall_info *hardwall_create(
/* Set up appropriate hardwalling on all affected cpus. */
hardwall_setup(rect);
/* Create a /proc/tile/hardwall entry. */
hardwall_add_proc(rect);
return rect;
}
@@ -645,6 +659,9 @@ static void hardwall_destroy(struct hardwall_info *rect)
/* Restart switch and disable firewall. */
on_each_cpu_mask(&mask, restart_udn_switch, NULL, 1);
/* Remove the /proc/tile/hardwall entry. */
hardwall_remove_proc(rect);
/* Now free the rectangle from the list. */
spin_lock_irqsave(&hardwall_lock, flags);
BUG_ON(!list_empty(&rect->task_head));
@@ -654,35 +671,57 @@ static void hardwall_destroy(struct hardwall_info *rect)
}
/*
* Dump hardwall state via /proc; initialized in arch/tile/sys/proc.c.
*/
int proc_tile_hardwall_show(struct seq_file *sf, void *v)
static int hardwall_proc_show(struct seq_file *sf, void *v)
{
struct hardwall_info *r;
struct hardwall_info *rect = sf->private;
char buf[256];
if (udn_disabled) {
seq_printf(sf, "%dx%d 0,0 pids:\n", smp_width, smp_height);
return 0;
}
spin_lock_irq(&hardwall_lock);
list_for_each_entry(r, &rectangles, list) {
struct task_struct *p;
seq_printf(sf, "%dx%d %d,%d pids:",
r->width, r->height, r->ulhc_x, r->ulhc_y);
list_for_each_entry(p, &r->task_head, thread.hardwall_list) {
unsigned int cpu = cpumask_first(&p->cpus_allowed);
unsigned int x = cpu % smp_width;
unsigned int y = cpu / smp_width;
seq_printf(sf, " %d@%d,%d", p->pid, x, y);
}
seq_printf(sf, "\n");
}
spin_unlock_irq(&hardwall_lock);
int rc = cpulist_scnprintf(buf, sizeof(buf), &rect->cpumask);
buf[rc++] = '\n';
seq_write(sf, buf, rc);
return 0;
}
static int hardwall_proc_open(struct inode *inode,
struct file *file)
{
return single_open(file, hardwall_proc_show, PDE(inode)->data);
}
static const struct file_operations hardwall_proc_fops = {
.open = hardwall_proc_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static void hardwall_add_proc(struct hardwall_info *rect)
{
char buf[64];
snprintf(buf, sizeof(buf), "%d", rect->id);
proc_create_data(buf, 0444, hardwall_proc_dir,
&hardwall_proc_fops, rect);
}
static void hardwall_remove_proc(struct hardwall_info *rect)
{
char buf[64];
snprintf(buf, sizeof(buf), "%d", rect->id);
remove_proc_entry(buf, hardwall_proc_dir);
}
int proc_pid_hardwall(struct task_struct *task, char *buffer)
{
struct hardwall_info *rect = task->thread.hardwall;
return rect ? sprintf(buffer, "%d\n", rect->id) : 0;
}
void proc_tile_hardwall_init(struct proc_dir_entry *root)
{
if (!udn_disabled)
hardwall_proc_dir = proc_mkdir("hardwall", root);
}
/*
* Character device support via ioctl/close.
@@ -716,6 +755,9 @@ static long hardwall_ioctl(struct file *file, unsigned int a, unsigned long b)
return -EINVAL;
return hardwall_deactivate(current);
case _HARDWALL_GET_ID:
return rect ? rect->id : -EINVAL;
default:
return -EINVAL;
}
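
Together, the hardwall.h and hardwall.c hunks replace the single
/proc/tile/hardwall dump with one cpulist file per rectangle plus a
HARDWALL_GET_ID ioctl. A hedged userspace sketch combining the two (the
device-node handling is an assumption, not taken from this diff):

#include <stdio.h>
#include <sys/ioctl.h>
#include <asm/hardwall.h>	/* HARDWALL_GET_ID, added above */

static void show_hardwall(int devfd)	/* devfd: an open hardwall device */
{
	char path[64], cpus[256];
	int id = ioctl(devfd, HARDWALL_GET_ID);
	FILE *f;

	if (id < 0)
		return;		/* task is not confined to a hardwall */

	/* Each rectangle now exports its cpulist under its numeric id. */
	snprintf(path, sizeof(path), "/proc/tile/hardwall/%d", id);
	f = fopen(path, "r");
	if (f && fgets(cpus, sizeof(cpus), f))
		printf("hardwall %d: cpus %s", id, cpus);
	if (f)
		fclose(f);
}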

View file

@@ -27,6 +27,7 @@
#include <asm/processor.h>
#include <asm/sections.h>
#include <asm/homecache.h>
#include <asm/hardwall.h>
#include <arch/chip.h>
@@ -88,3 +89,75 @@ const struct seq_operations cpuinfo_op = {
.stop = c_stop,
.show = show_cpuinfo,
};
/*
* Support /proc/tile directory
*/
static int __init proc_tile_init(void)
{
struct proc_dir_entry *root = proc_mkdir("tile", NULL);
if (root == NULL)
return 0;
proc_tile_hardwall_init(root);
return 0;
}
arch_initcall(proc_tile_init);
/*
* Support /proc/sys/tile directory
*/
#ifndef __tilegx__ /* FIXME: GX: no support for unaligned access yet */
static ctl_table unaligned_subtable[] = {
{
.procname = "enabled",
.data = &unaligned_fixup,
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = &proc_dointvec
},
{
.procname = "printk",
.data = &unaligned_printk,
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = &proc_dointvec
},
{
.procname = "count",
.data = &unaligned_fixup_count,
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = &proc_dointvec
},
{}
};
static ctl_table unaligned_table[] = {
{
.procname = "unaligned_fixup",
.mode = 0555,
.child = unaligned_subtable
},
{}
};
#endif
static struct ctl_path tile_path[] = {
{ .procname = "tile" },
{ }
};
static int __init proc_sys_tile_init(void)
{
#ifndef __tilegx__ /* FIXME: GX: no support for unaligned access yet */
register_sysctl_paths(tile_path, unaligned_table);
#endif
return 0;
}
arch_initcall(proc_sys_tile_init);
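
register_sysctl_paths() above roots the table at /proc/sys/tile, so the
knobs surface as /proc/sys/tile/unaligned_fixup/{enabled,printk,count}.
A hedged sketch of flipping one from userspace (plain file I/O; no
special API involved):

#include <stdio.h>

static int set_unaligned_fixup(int on)
{
	FILE *f = fopen("/proc/sys/tile/unaligned_fixup/enabled", "w");

	if (!f)
		return -1;
	fprintf(f, "%d\n", on);	/* proc_dointvec parses the integer */
	return fclose(f);
}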

arch/tile/kernel/sysfs.c (new file, 185 lines)
View file

@@ -0,0 +1,185 @@
/*
* Copyright 2011 Tilera Corporation. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation, version 2.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for
* more details.
*
* /sys entry support.
*/
#include <linux/sysdev.h>
#include <linux/cpu.h>
#include <linux/slab.h>
#include <linux/smp.h>
#include <hv/hypervisor.h>
/* Return a string queried from the hypervisor, truncated to page size. */
static ssize_t get_hv_confstr(char *page, int query)
{
ssize_t n = hv_confstr(query, (unsigned long)page, PAGE_SIZE - 1);
n = n < 0 ? 0 : min(n, (ssize_t)PAGE_SIZE - 1) - 1;
if (n)
page[n++] = '\n';
page[n] = '\0';
return n;
}
static ssize_t chip_width_show(struct sysdev_class *dev,
struct sysdev_class_attribute *attr,
char *page)
{
return sprintf(page, "%u\n", smp_width);
}
static SYSDEV_CLASS_ATTR(chip_width, 0444, chip_width_show, NULL);
static ssize_t chip_height_show(struct sysdev_class *dev,
struct sysdev_class_attribute *attr,
char *page)
{
return sprintf(page, "%u\n", smp_height);
}
static SYSDEV_CLASS_ATTR(chip_height, 0444, chip_height_show, NULL);
static ssize_t chip_serial_show(struct sysdev_class *dev,
struct sysdev_class_attribute *attr,
char *page)
{
return get_hv_confstr(page, HV_CONFSTR_CHIP_SERIAL_NUM);
}
static SYSDEV_CLASS_ATTR(chip_serial, 0444, chip_serial_show, NULL);
static ssize_t chip_revision_show(struct sysdev_class *dev,
struct sysdev_class_attribute *attr,
char *page)
{
return get_hv_confstr(page, HV_CONFSTR_CHIP_REV);
}
static SYSDEV_CLASS_ATTR(chip_revision, 0444, chip_revision_show, NULL);
static ssize_t type_show(struct sysdev_class *dev,
struct sysdev_class_attribute *attr,
char *page)
{
return sprintf(page, "tilera\n");
}
static SYSDEV_CLASS_ATTR(type, 0444, type_show, NULL);
#define HV_CONF_ATTR(name, conf) \
static ssize_t name ## _show(struct sysdev_class *dev, \
struct sysdev_class_attribute *attr, \
char *page) \
{ \
return get_hv_confstr(page, conf); \
} \
static SYSDEV_CLASS_ATTR(name, 0444, name ## _show, NULL);
HV_CONF_ATTR(version, HV_CONFSTR_HV_SW_VER)
HV_CONF_ATTR(config_version, HV_CONFSTR_HV_CONFIG_VER)
HV_CONF_ATTR(board_part, HV_CONFSTR_BOARD_PART_NUM)
HV_CONF_ATTR(board_serial, HV_CONFSTR_BOARD_SERIAL_NUM)
HV_CONF_ATTR(board_revision, HV_CONFSTR_BOARD_REV)
HV_CONF_ATTR(board_description, HV_CONFSTR_BOARD_DESC)
HV_CONF_ATTR(mezz_part, HV_CONFSTR_MEZZ_PART_NUM)
HV_CONF_ATTR(mezz_serial, HV_CONFSTR_MEZZ_SERIAL_NUM)
HV_CONF_ATTR(mezz_revision, HV_CONFSTR_MEZZ_REV)
HV_CONF_ATTR(mezz_description, HV_CONFSTR_MEZZ_DESC)
HV_CONF_ATTR(switch_control, HV_CONFSTR_SWITCH_CONTROL)
static struct attribute *board_attrs[] = {
&attr_board_part.attr,
&attr_board_serial.attr,
&attr_board_revision.attr,
&attr_board_description.attr,
&attr_mezz_part.attr,
&attr_mezz_serial.attr,
&attr_mezz_revision.attr,
&attr_mezz_description.attr,
&attr_switch_control.attr,
NULL
};
static struct attribute_group board_attr_group = {
.name = "board",
.attrs = board_attrs,
};
static struct bin_attribute hvconfig_bin;
static ssize_t
hvconfig_bin_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t off, size_t count)
{
static size_t size;
/* Lazily learn the true size (minus the trailing NUL). */
if (size == 0)
size = hv_confstr(HV_CONFSTR_HV_CONFIG, 0, 0) - 1;
/* Check and adjust input parameters. */
if (off > size)
return -EINVAL;
if (count > size - off)
count = size - off;
if (count) {
/* Get a copy of the hvc and copy out the relevant portion. */
char *hvc;
size = off + count;
hvc = kmalloc(size, GFP_KERNEL);
if (hvc == NULL)
return -ENOMEM;
hv_confstr(HV_CONFSTR_HV_CONFIG, (unsigned long)hvc, size);
memcpy(buf, hvc + off, count);
kfree(hvc);
}
return count;
}
static int __init create_sysfs_entries(void)
{
struct sysdev_class *cls = &cpu_sysdev_class;
int err = 0;
#define create_cpu_attr(name) \
if (!err) \
err = sysfs_create_file(&cls->kset.kobj, &attr_##name.attr);
create_cpu_attr(chip_width);
create_cpu_attr(chip_height);
create_cpu_attr(chip_serial);
create_cpu_attr(chip_revision);
#define create_hv_attr(name) \
if (!err) \
err = sysfs_create_file(hypervisor_kobj, &attr_##name.attr);
create_hv_attr(type);
create_hv_attr(version);
create_hv_attr(config_version);
if (!err)
err = sysfs_create_group(hypervisor_kobj, &board_attr_group);
if (!err) {
sysfs_bin_attr_init(&hvconfig_bin);
hvconfig_bin.attr.name = "hvconfig";
hvconfig_bin.attr.mode = S_IRUGO;
hvconfig_bin.read = hvconfig_bin_read;
hvconfig_bin.size = PAGE_SIZE;
err = sysfs_create_bin_file(hypervisor_kobj, &hvconfig_bin);
}
return err;
}
subsys_initcall(create_sysfs_entries);
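
The chip_* attributes above attach to the cpu sysdev class (under
/sys/devices/system/cpu/) and the hypervisor attributes to
/sys/hypervisor, including the "board" group. A hedged userspace sketch,
assuming the usual /sys mount point:

#include <stdio.h>

static void print_board_part(void)
{
	char buf[128];
	FILE *f = fopen("/sys/hypervisor/board/board_part", "r");

	/* Attribute reads return short newline-terminated text. */
	if (f && fgets(buf, sizeof(buf), f))
		printf("board part number: %s", buf);
	if (f)
		fclose(f);
}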

View file

@@ -139,7 +139,7 @@ static inline unsigned int acpi_processor_cstate_check(unsigned int max_cstate)
boot_cpu_data.x86_model <= 0x05 &&
boot_cpu_data.x86_mask < 0x0A)
return 1;
else if (c1e_detected)
else if (amd_e400_c1e_detected)
return 1;
else
return max_cstate;

View file

@@ -125,7 +125,7 @@
#define X86_FEATURE_OSXSAVE (4*32+27) /* "" XSAVE enabled in the OS */
#define X86_FEATURE_AVX (4*32+28) /* Advanced Vector Extensions */
#define X86_FEATURE_F16C (4*32+29) /* 16-bit fp conversions */
#define X86_FEATURE_RDRND (4*32+30) /* The RDRAND instruction */
#define X86_FEATURE_RDRAND (4*32+30) /* The RDRAND instruction */
#define X86_FEATURE_HYPERVISOR (4*32+31) /* Running on a hypervisor */
/* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
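
The rename only corrects the spelling of the flag; the underlying CPUID
bit (leaf 1, ECX bit 30) is unchanged, so feature tests keep their usual
shape. A minimal sketch against the renamed constant:

#include <asm/cpufeature.h>

static int have_rdrand(void)
{
	/* Same bit as the old X86_FEATURE_RDRND; only the name moved. */
	return boot_cpu_has(X86_FEATURE_RDRAND);
}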

View file

@@ -4,30 +4,33 @@
#include <asm/desc_defs.h>
#include <asm/ldt.h>
#include <asm/mmu.h>
#include <linux/smp.h>
static inline void fill_ldt(struct desc_struct *desc,
const struct user_desc *info)
static inline void fill_ldt(struct desc_struct *desc, const struct user_desc *info)
{
desc->limit0 = info->limit & 0x0ffff;
desc->base0 = info->base_addr & 0x0000ffff;
desc->limit0 = info->limit & 0x0ffff;
desc->base1 = (info->base_addr & 0x00ff0000) >> 16;
desc->type = (info->read_exec_only ^ 1) << 1;
desc->type |= info->contents << 2;
desc->s = 1;
desc->dpl = 0x3;
desc->p = info->seg_not_present ^ 1;
desc->limit = (info->limit & 0xf0000) >> 16;
desc->avl = info->useable;
desc->d = info->seg_32bit;
desc->g = info->limit_in_pages;
desc->base2 = (info->base_addr & 0xff000000) >> 24;
desc->base0 = (info->base_addr & 0x0000ffff);
desc->base1 = (info->base_addr & 0x00ff0000) >> 16;
desc->type = (info->read_exec_only ^ 1) << 1;
desc->type |= info->contents << 2;
desc->s = 1;
desc->dpl = 0x3;
desc->p = info->seg_not_present ^ 1;
desc->limit = (info->limit & 0xf0000) >> 16;
desc->avl = info->useable;
desc->d = info->seg_32bit;
desc->g = info->limit_in_pages;
desc->base2 = (info->base_addr & 0xff000000) >> 24;
/*
* Don't allow setting of the lm bit. It is useless anyway
* because 64bit system calls require __USER_CS:
*/
desc->l = 0;
desc->l = 0;
}
extern struct desc_ptr idt_descr;
@@ -36,6 +39,7 @@ extern gate_desc idt_table[];
struct gdt_page {
struct desc_struct gdt[GDT_ENTRIES];
} __attribute__((aligned(PAGE_SIZE)));
DECLARE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page);
static inline struct desc_struct *get_cpu_gdt_table(unsigned int cpu)
@@ -48,16 +52,16 @@ static inline struct desc_struct *get_cpu_gdt_table(unsigned int cpu)
static inline void pack_gate(gate_desc *gate, unsigned type, unsigned long func,
unsigned dpl, unsigned ist, unsigned seg)
{
gate->offset_low = PTR_LOW(func);
gate->segment = __KERNEL_CS;
gate->ist = ist;
gate->p = 1;
gate->dpl = dpl;
gate->zero0 = 0;
gate->zero1 = 0;
gate->type = type;
gate->offset_middle = PTR_MIDDLE(func);
gate->offset_high = PTR_HIGH(func);
gate->offset_low = PTR_LOW(func);
gate->segment = __KERNEL_CS;
gate->ist = ist;
gate->p = 1;
gate->dpl = dpl;
gate->zero0 = 0;
gate->zero1 = 0;
gate->type = type;
gate->offset_middle = PTR_MIDDLE(func);
gate->offset_high = PTR_HIGH(func);
}
#else
@@ -66,8 +70,7 @@ static inline void pack_gate(gate_desc *gate, unsigned char type,
unsigned short seg)
{
gate->a = (seg << 16) | (base & 0xffff);
gate->b = (base & 0xffff0000) |
(((0x80 | type | (dpl << 5)) & 0xff) << 8);
gate->b = (base & 0xffff0000) | (((0x80 | type | (dpl << 5)) & 0xff) << 8);
}
#endif
@@ -75,31 +78,29 @@ static inline void pack_gate(gate_desc *gate, unsigned char type,
static inline int desc_empty(const void *ptr)
{
const u32 *desc = ptr;
return !(desc[0] | desc[1]);
}
#ifdef CONFIG_PARAVIRT
#include <asm/paravirt.h>
#else
#define load_TR_desc() native_load_tr_desc()
#define load_gdt(dtr) native_load_gdt(dtr)
#define load_idt(dtr) native_load_idt(dtr)
#define load_tr(tr) asm volatile("ltr %0"::"m" (tr))
#define load_ldt(ldt) asm volatile("lldt %0"::"m" (ldt))
#define load_TR_desc() native_load_tr_desc()
#define load_gdt(dtr) native_load_gdt(dtr)
#define load_idt(dtr) native_load_idt(dtr)
#define load_tr(tr) asm volatile("ltr %0"::"m" (tr))
#define load_ldt(ldt) asm volatile("lldt %0"::"m" (ldt))
#define store_gdt(dtr) native_store_gdt(dtr)
#define store_idt(dtr) native_store_idt(dtr)
#define store_tr(tr) (tr = native_store_tr())
#define store_gdt(dtr) native_store_gdt(dtr)
#define store_idt(dtr) native_store_idt(dtr)
#define store_tr(tr) (tr = native_store_tr())
#define load_TLS(t, cpu) native_load_tls(t, cpu)
#define set_ldt native_set_ldt
#define load_TLS(t, cpu) native_load_tls(t, cpu)
#define set_ldt native_set_ldt
#define write_ldt_entry(dt, entry, desc) \
native_write_ldt_entry(dt, entry, desc)
#define write_gdt_entry(dt, entry, desc, type) \
native_write_gdt_entry(dt, entry, desc, type)
#define write_idt_entry(dt, entry, g) \
native_write_idt_entry(dt, entry, g)
#define write_ldt_entry(dt, entry, desc) native_write_ldt_entry(dt, entry, desc)
#define write_gdt_entry(dt, entry, desc, type) native_write_gdt_entry(dt, entry, desc, type)
#define write_idt_entry(dt, entry, g) native_write_idt_entry(dt, entry, g)
static inline void paravirt_alloc_ldt(struct desc_struct *ldt, unsigned entries)
{
@@ -112,33 +113,27 @@ static inline void paravirt_free_ldt(struct desc_struct *ldt, unsigned entries)
#define store_ldt(ldt) asm("sldt %0" : "=m"(ldt))
static inline void native_write_idt_entry(gate_desc *idt, int entry,
const gate_desc *gate)
static inline void native_write_idt_entry(gate_desc *idt, int entry, const gate_desc *gate)
{
memcpy(&idt[entry], gate, sizeof(*gate));
}
static inline void native_write_ldt_entry(struct desc_struct *ldt, int entry,
const void *desc)
static inline void native_write_ldt_entry(struct desc_struct *ldt, int entry, const void *desc)
{
memcpy(&ldt[entry], desc, 8);
}
static inline void native_write_gdt_entry(struct desc_struct *gdt, int entry,
const void *desc, int type)
static inline void
native_write_gdt_entry(struct desc_struct *gdt, int entry, const void *desc, int type)
{
unsigned int size;
switch (type) {
case DESC_TSS:
size = sizeof(tss_desc);
break;
case DESC_LDT:
size = sizeof(ldt_desc);
break;
default:
size = sizeof(struct desc_struct);
break;
case DESC_TSS: size = sizeof(tss_desc); break;
case DESC_LDT: size = sizeof(ldt_desc); break;
default: size = sizeof(*gdt); break;
}
memcpy(&gdt[entry], desc, size);
}
@@ -154,20 +149,21 @@ static inline void pack_descriptor(struct desc_struct *desc, unsigned long base,
}
static inline void set_tssldt_descriptor(void *d, unsigned long addr,
unsigned type, unsigned size)
static inline void set_tssldt_descriptor(void *d, unsigned long addr, unsigned type, unsigned size)
{
#ifdef CONFIG_X86_64
struct ldttss_desc64 *desc = d;
memset(desc, 0, sizeof(*desc));
desc->limit0 = size & 0xFFFF;
desc->base0 = PTR_LOW(addr);
desc->base1 = PTR_MIDDLE(addr) & 0xFF;
desc->type = type;
desc->p = 1;
desc->limit1 = (size >> 16) & 0xF;
desc->base2 = (PTR_MIDDLE(addr) >> 8) & 0xFF;
desc->base3 = PTR_HIGH(addr);
desc->limit0 = size & 0xFFFF;
desc->base0 = PTR_LOW(addr);
desc->base1 = PTR_MIDDLE(addr) & 0xFF;
desc->type = type;
desc->p = 1;
desc->limit1 = (size >> 16) & 0xF;
desc->base2 = (PTR_MIDDLE(addr) >> 8) & 0xFF;
desc->base3 = PTR_HIGH(addr);
#else
pack_descriptor((struct desc_struct *)d, addr, size, 0x80 | type, 0);
#endif
@@ -237,14 +233,16 @@ static inline void native_store_idt(struct desc_ptr *dtr)
static inline unsigned long native_store_tr(void)
{
unsigned long tr;
asm volatile("str %0":"=r" (tr));
return tr;
}
static inline void native_load_tls(struct thread_struct *t, unsigned int cpu)
{
unsigned int i;
struct desc_struct *gdt = get_cpu_gdt_table(cpu);
unsigned int i;
for (i = 0; i < GDT_ENTRY_TLS_ENTRIES; i++)
gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i];
@@ -313,6 +311,7 @@ static inline void _set_gate(int gate, unsigned type, void *addr,
unsigned dpl, unsigned ist, unsigned seg)
{
gate_desc s;
pack_gate(&s, type, (unsigned long)addr, dpl, ist, seg);
/*
* does not need to be atomic because it is only done once at
@@ -343,8 +342,9 @@ static inline void alloc_system_vector(int vector)
set_bit(vector, used_vectors);
if (first_system_vector > vector)
first_system_vector = vector;
} else
} else {
BUG();
}
}
static inline void alloc_intr_gate(unsigned int n, void *addr)

View file

@@ -16,6 +16,6 @@ static inline void enter_idle(void) { }
static inline void exit_idle(void) { }
#endif /* CONFIG_X86_64 */
void c1e_remove_cpu(int cpu);
void amd_e400_remove_cpu(int cpu);
#endif /* _ASM_X86_IDLE_H */

View file

@@ -11,14 +11,14 @@
typedef struct {
void *ldt;
int size;
struct mutex lock;
void *vdso;
#ifdef CONFIG_X86_64
/* True if mm supports a task running in 32 bit compatibility mode. */
unsigned short ia32_compat;
#endif
struct mutex lock;
void *vdso;
} mm_context_t;
#ifdef CONFIG_SMP

View file

@@ -754,10 +754,10 @@ static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
extern void mwait_idle_with_hints(unsigned long eax, unsigned long ecx);
extern void select_idle_routine(const struct cpuinfo_x86 *c);
extern void init_c1e_mask(void);
extern void init_amd_e400_c1e_mask(void);
extern unsigned long boot_option_idle_override;
extern bool c1e_detected;
extern bool amd_e400_c1e_detected;
enum idle_boot_override {IDLE_NO_OVERRIDE=0, IDLE_HALT, IDLE_NOMWAIT,
IDLE_POLL, IDLE_FORCE_MWAIT};

View file

@@ -5,7 +5,7 @@
*
* SGI UV Broadcast Assist Unit definitions
*
* Copyright (C) 2008 Silicon Graphics, Inc. All rights reserved.
* Copyright (C) 2008-2011 Silicon Graphics, Inc. All rights reserved.
*/
#ifndef _ASM_X86_UV_UV_BAU_H
@@ -35,17 +35,20 @@
#define MAX_CPUS_PER_UVHUB 64
#define MAX_CPUS_PER_SOCKET 32
#define UV_ADP_SIZE 64 /* hardware-provided max. */
#define UV_CPUS_PER_ACT_STATUS 32 /* hardware-provided max. */
#define UV_ITEMS_PER_DESCRIPTOR 8
#define ADP_SZ 64 /* hardware-provided max. */
#define UV_CPUS_PER_AS 32 /* hardware-provided max. */
#define ITEMS_PER_DESC 8
/* the 'throttle' to prevent the hardware stay-busy bug */
#define MAX_BAU_CONCURRENT 3
#define UV_ACT_STATUS_MASK 0x3
#define UV_ACT_STATUS_SIZE 2
#define UV_DISTRIBUTION_SIZE 256
#define UV_SW_ACK_NPENDING 8
#define UV_NET_ENDPOINT_INTD 0x38
#define UV_DESC_BASE_PNODE_SHIFT 49
#define UV1_NET_ENDPOINT_INTD 0x38
#define UV2_NET_ENDPOINT_INTD 0x28
#define UV_NET_ENDPOINT_INTD (is_uv1_hub() ? \
UV1_NET_ENDPOINT_INTD : UV2_NET_ENDPOINT_INTD)
#define UV_DESC_PSHIFT 49
#define UV_PAYLOADQ_PNODE_SHIFT 49
#define UV_PTC_BASENAME "sgi_uv/ptc_statistics"
#define UV_BAU_BASENAME "sgi_uv/bau_tunables"
@@ -53,29 +56,64 @@
#define UV_BAU_TUNABLES_FILE "bau_tunables"
#define WHITESPACE " \t\n"
#define uv_physnodeaddr(x) ((__pa((unsigned long)(x)) & uv_mmask))
#define UV_ENABLE_INTD_SOFT_ACK_MODE_SHIFT 15
#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHIFT 16
#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD 0x0000000009UL
#define cpubit_isset(cpu, bau_local_cpumask) \
test_bit((cpu), (bau_local_cpumask).bits)
/* [19:16] SOFT_ACK timeout period 19: 1 is urgency 7 17:16 1 is multiplier */
#define BAU_MISC_CONTROL_MULT_MASK 3
/*
* UV2: Bit 19 selects between
* (0): 10 microsecond timebase and
* (1): 80 microseconds
* we're using 65 units of 10us (650us), close to the UV1 period (~655us)
*/
#define UV1_INTD_SOFT_ACK_TIMEOUT_PERIOD (9UL)
#define UV2_INTD_SOFT_ACK_TIMEOUT_PERIOD (65*10UL)
#define UVH_AGING_PRESCALE_SEL 0x000000b000UL
#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD (is_uv1_hub() ? \
UV1_INTD_SOFT_ACK_TIMEOUT_PERIOD : \
UV2_INTD_SOFT_ACK_TIMEOUT_PERIOD)
#define BAU_MISC_CONTROL_MULT_MASK 3
#define UVH_AGING_PRESCALE_SEL 0x000000b000UL
/* [30:28] URGENCY_7 an index into a table of times */
#define BAU_URGENCY_7_SHIFT 28
#define BAU_URGENCY_7_MASK 7
#define BAU_URGENCY_7_SHIFT 28
#define BAU_URGENCY_7_MASK 7
#define UVH_TRANSACTION_TIMEOUT 0x000000b200UL
#define UVH_TRANSACTION_TIMEOUT 0x000000b200UL
/* [45:40] BAU - BAU transaction timeout select - a multiplier */
#define BAU_TRANS_SHIFT 40
#define BAU_TRANS_MASK 0x3f
#define BAU_TRANS_SHIFT 40
#define BAU_TRANS_MASK 0x3f
/*
* shorten some awkward names
*/
#define AS_PUSH_SHIFT UVH_LB_BAU_SB_ACTIVATION_CONTROL_PUSH_SHFT
#define SOFTACK_MSHIFT UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT
#define SOFTACK_PSHIFT UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT
#define SOFTACK_TIMEOUT_PERIOD UV_INTD_SOFT_ACK_TIMEOUT_PERIOD
#define write_gmmr uv_write_global_mmr64
#define write_lmmr uv_write_local_mmr
#define read_lmmr uv_read_local_mmr
#define read_gmmr uv_read_global_mmr64
/*
* bits in UVH_LB_BAU_SB_ACTIVATION_STATUS_0/1
*/
#define DESC_STATUS_IDLE 0
#define DESC_STATUS_ACTIVE 1
#define DESC_STATUS_DESTINATION_TIMEOUT 2
#define DESC_STATUS_SOURCE_TIMEOUT 3
#define DS_IDLE 0
#define DS_ACTIVE 1
#define DS_DESTINATION_TIMEOUT 2
#define DS_SOURCE_TIMEOUT 3
/*
* bits put together from HRP_LB_BAU_SB_ACTIVATION_STATUS_0/1/2
* values 1 and 5 will not occur
*/
#define UV2H_DESC_IDLE 0
#define UV2H_DESC_DEST_TIMEOUT 2
#define UV2H_DESC_DEST_STRONG_NACK 3
#define UV2H_DESC_BUSY 4
#define UV2H_DESC_SOURCE_TIMEOUT 6
#define UV2H_DESC_DEST_PUT_ERR 7
/*
* delay for 'plugged' timeout retries, in microseconds
@@ -86,15 +124,24 @@
* thresholds at which to use IPI to free resources
*/
/* after this # consecutive 'plugged' timeouts, use IPI to release resources */
#define PLUGSB4RESET 100
#define PLUGSB4RESET 100
/* after this many consecutive timeouts, use IPI to release resources */
#define TIMEOUTSB4RESET 1
#define TIMEOUTSB4RESET 1
/* at this number uses of IPI to release resources, giveup the request */
#define IPI_RESET_LIMIT 1
#define IPI_RESET_LIMIT 1
/* after this # consecutive successes, bump up the throttle if it was lowered */
#define COMPLETE_THRESHOLD 5
#define COMPLETE_THRESHOLD 5
#define UV_LB_SUBNODEID 0x10
#define UV_LB_SUBNODEID 0x10
/* these two are the same for UV1 and UV2: */
#define UV_SA_SHFT UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT
#define UV_SA_MASK UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK
/* 4 bits of software ack period */
#define UV2_ACK_MASK 0x7UL
#define UV2_ACK_UNITS_SHFT 3
#define UV2_LEG_SHFT UV2H_LB_BAU_MISC_CONTROL_USE_LEGACY_DESCRIPTOR_FORMATS_SHFT
#define UV2_EXT_SHFT UV2H_LB_BAU_MISC_CONTROL_ENABLE_EXTENDED_SB_STATUS_SHFT
/*
* number of entries in the destination side payload queue
@@ -115,9 +162,16 @@
/*
* tuning the action when the numalink network is extremely delayed
*/
#define CONGESTED_RESPONSE_US 1000 /* 'long' response time, in microseconds */
#define CONGESTED_REPS 10 /* long delays averaged over this many broadcasts */
#define CONGESTED_PERIOD 30 /* time for the bau to be disabled, in seconds */
#define CONGESTED_RESPONSE_US 1000 /* 'long' response time, in
microseconds */
#define CONGESTED_REPS 10 /* long delays averaged over
this many broadcasts */
#define CONGESTED_PERIOD 30 /* time for the bau to be
disabled, in seconds */
/* see msg_type: */
#define MSG_NOOP 0
#define MSG_REGULAR 1
#define MSG_RETRY 2
/*
* Distribution: 32 bytes (256 bits) (bytes 0-0x1f of descriptor)
@@ -129,8 +183,8 @@
* 'base_dest_nasid' field of the header corresponds to the
* destination nodeID associated with that specified bit.
*/
struct bau_target_uvhubmask {
unsigned long bits[BITS_TO_LONGS(UV_DISTRIBUTION_SIZE)];
struct bau_targ_hubmask {
unsigned long bits[BITS_TO_LONGS(UV_DISTRIBUTION_SIZE)];
};
/*
@@ -139,7 +193,7 @@ struct bau_target_uvhubmask {
* enough bits for max. cpu's per uvhub)
*/
struct bau_local_cpumask {
unsigned long bits;
unsigned long bits;
};
/*
@@ -160,14 +214,14 @@ struct bau_local_cpumask {
* The payload is software-defined for INTD transactions
*/
struct bau_msg_payload {
unsigned long address; /* signifies a page or all TLB's
of the cpu */
unsigned long address; /* signifies a page or all
TLB's of the cpu */
/* 64 bits */
unsigned short sending_cpu; /* filled in by sender */
unsigned short sending_cpu; /* filled in by sender */
/* 16 bits */
unsigned short acknowledge_count;/* filled in by destination */
unsigned short acknowledge_count; /* filled in by destination */
/* 16 bits */
unsigned int reserved1:32; /* not usable */
unsigned int reserved1:32; /* not usable */
};
@@ -176,93 +230,96 @@ struct bau_msg_payload {
* see table 4.2.3.0.1 in broadcast_assist spec.
*/
struct bau_msg_header {
unsigned int dest_subnodeid:6; /* must be 0x10, for the LB */
unsigned int dest_subnodeid:6; /* must be 0x10, for the LB */
/* bits 5:0 */
unsigned int base_dest_nasid:15; /* nasid of the */
/* bits 20:6 */ /* first bit in uvhub map */
unsigned int command:8; /* message type */
unsigned int base_dest_nasid:15; /* nasid of the first bit */
/* bits 20:6 */ /* in uvhub map */
unsigned int command:8; /* message type */
/* bits 28:21 */
/* 0x38: SN3net EndPoint Message */
unsigned int rsvd_1:3; /* must be zero */
/* 0x38: SN3net EndPoint Message */
unsigned int rsvd_1:3; /* must be zero */
/* bits 31:29 */
/* int will align on 32 bits */
unsigned int rsvd_2:9; /* must be zero */
/* int will align on 32 bits */
unsigned int rsvd_2:9; /* must be zero */
/* bits 40:32 */
/* Suppl_A is 56-41 */
unsigned int sequence:16;/* message sequence number */
/* bits 56:41 */ /* becomes bytes 16-17 of msg */
/* Address field (96:57) is never used as an
address (these are address bits 42:3) */
/* Suppl_A is 56-41 */
unsigned int sequence:16; /* message sequence number */
/* bits 56:41 */ /* becomes bytes 16-17 of msg */
/* Address field (96:57) is
never used as an address
(these are address bits
42:3) */
unsigned int rsvd_3:1; /* must be zero */
unsigned int rsvd_3:1; /* must be zero */
/* bit 57 */
/* address bits 27:4 are payload */
/* address bits 27:4 are payload */
/* these next 24 (58-81) bits become bytes 12-14 of msg */
/* bits 65:58 land in byte 12 */
unsigned int replied_to:1;/* sent as 0 by the source to byte 12 */
unsigned int replied_to:1; /* sent as 0 by the source to
byte 12 */
/* bit 58 */
unsigned int msg_type:3; /* software type of the message*/
unsigned int msg_type:3; /* software type of the
message */
/* bits 61:59 */
unsigned int canceled:1; /* message canceled, resource to be freed*/
unsigned int canceled:1; /* message canceled, resource
is to be freed*/
/* bit 62 */
unsigned int payload_1a:1;/* not currently used */
unsigned int payload_1a:1; /* not currently used */
/* bit 63 */
unsigned int payload_1b:2;/* not currently used */
unsigned int payload_1b:2; /* not currently used */
/* bits 65:64 */
/* bits 73:66 land in byte 13 */
unsigned int payload_1ca:6;/* not currently used */
unsigned int payload_1ca:6; /* not currently used */
/* bits 71:66 */
unsigned int payload_1c:2;/* not currently used */
unsigned int payload_1c:2; /* not currently used */
/* bits 73:72 */
/* bits 81:74 land in byte 14 */
unsigned int payload_1d:6;/* not currently used */
unsigned int payload_1d:6; /* not currently used */
/* bits 79:74 */
unsigned int payload_1e:2;/* not currently used */
unsigned int payload_1e:2; /* not currently used */
/* bits 81:80 */
unsigned int rsvd_4:7; /* must be zero */
unsigned int rsvd_4:7; /* must be zero */
/* bits 88:82 */
unsigned int sw_ack_flag:1;/* software acknowledge flag */
unsigned int swack_flag:1; /* software acknowledge flag */
/* bit 89 */
/* INTD transactions at destination are to
wait for software acknowledge */
unsigned int rsvd_5:6; /* must be zero */
/* INTD transactions at
destination are to wait for
software acknowledge */
unsigned int rsvd_5:6; /* must be zero */
/* bits 95:90 */
unsigned int rsvd_6:5; /* must be zero */
unsigned int rsvd_6:5; /* must be zero */
/* bits 100:96 */
unsigned int int_both:1;/* if 1, interrupt both sockets on the uvhub */
unsigned int int_both:1; /* if 1, interrupt both sockets
on the uvhub */
/* bit 101*/
unsigned int fairness:3;/* usually zero */
unsigned int fairness:3; /* usually zero */
/* bits 104:102 */
unsigned int multilevel:1; /* multi-level multicast format */
unsigned int multilevel:1; /* multi-level multicast
format */
/* bit 105 */
/* 0 for TLB: endpoint multi-unicast messages */
unsigned int chaining:1;/* next descriptor is part of this activation*/
/* 0 for TLB: endpoint multi-unicast messages */
unsigned int chaining:1; /* next descriptor is part of
this activation*/
/* bit 106 */
unsigned int rsvd_7:21; /* must be zero */
unsigned int rsvd_7:21; /* must be zero */
/* bits 127:107 */
};
/* see msg_type: */
#define MSG_NOOP 0
#define MSG_REGULAR 1
#define MSG_RETRY 2
/*
* The activation descriptor:
* The format of the message to send, plus all accompanying control
* Should be 64 bytes
*/
struct bau_desc {
struct bau_target_uvhubmask distribution;
struct bau_targ_hubmask distribution;
/*
* message template, consisting of header and payload:
*/
struct bau_msg_header header;
struct bau_msg_payload payload;
struct bau_msg_header header;
struct bau_msg_payload payload;
};
/*
* -payload-- ---------header------
@@ -281,59 +338,51 @@ struct bau_desc {
* are 32 bytes (2 micropackets) (256 bits) in length, but contain only 17
* bytes of usable data, including the sw ack vector in byte 15 (bits 127:120)
* (12 bytes come from bau_msg_payload, 3 from payload_1, 2 from
* sw_ack_vector and payload_2)
* swack_vec and payload_2)
* "Enabling Software Acknowledgment mode (see Section 4.3.3 Software
* Acknowledge Processing) also selects 32 byte (17 bytes usable) payload
* operation."
*/
struct bau_payload_queue_entry {
unsigned long address; /* signifies a page or all TLB's
of the cpu */
struct bau_pq_entry {
unsigned long address; /* signifies a page or all TLB's
of the cpu */
/* 64 bits, bytes 0-7 */
unsigned short sending_cpu; /* cpu that sent the message */
unsigned short sending_cpu; /* cpu that sent the message */
/* 16 bits, bytes 8-9 */
unsigned short acknowledge_count; /* filled in by destination */
unsigned short acknowledge_count; /* filled in by destination */
/* 16 bits, bytes 10-11 */
/* these next 3 bytes come from bits 58-81 of the message header */
unsigned short replied_to:1; /* sent as 0 by the source */
unsigned short msg_type:3; /* software message type */
unsigned short canceled:1; /* sent as 0 by the source */
unsigned short unused1:3; /* not currently using */
unsigned short replied_to:1; /* sent as 0 by the source */
unsigned short msg_type:3; /* software message type */
unsigned short canceled:1; /* sent as 0 by the source */
unsigned short unused1:3; /* not currently using */
/* byte 12 */
unsigned char unused2a; /* not currently using */
unsigned char unused2a; /* not currently using */
/* byte 13 */
unsigned char unused2; /* not currently using */
unsigned char unused2; /* not currently using */
/* byte 14 */
unsigned char sw_ack_vector; /* filled in by the hardware */
unsigned char swack_vec; /* filled in by the hardware */
/* byte 15 (bits 127:120) */
unsigned short sequence; /* message sequence number */
unsigned short sequence; /* message sequence number */
/* bytes 16-17 */
unsigned char unused4[2]; /* not currently using bytes 18-19 */
unsigned char unused4[2]; /* not currently using bytes 18-19 */
/* bytes 18-19 */
int number_of_cpus; /* filled in at destination */
int number_of_cpus; /* filled in at destination */
/* 32 bits, bytes 20-23 (aligned) */
unsigned char unused5[8]; /* not using */
unsigned char unused5[8]; /* not using */
/* bytes 24-31 */
};
struct msg_desc {
struct bau_payload_queue_entry *msg;
int msg_slot;
int sw_ack_slot;
struct bau_payload_queue_entry *va_queue_first;
struct bau_payload_queue_entry *va_queue_last;
struct bau_pq_entry *msg;
int msg_slot;
int swack_slot;
struct bau_pq_entry *queue_first;
struct bau_pq_entry *queue_last;
};
struct reset_args {
int sender;
int sender;
};
/*
@@ -341,112 +390,226 @@ struct reset_args {
*/
struct ptc_stats {
/* sender statistics */
unsigned long s_giveup; /* number of fall backs to IPI-style flushes */
unsigned long s_requestor; /* number of shootdown requests */
unsigned long s_stimeout; /* source side timeouts */
unsigned long s_dtimeout; /* destination side timeouts */
unsigned long s_time; /* time spent in sending side */
unsigned long s_retriesok; /* successful retries */
unsigned long s_ntargcpu; /* total number of cpu's targeted */
unsigned long s_ntargself; /* times the sending cpu was targeted */
unsigned long s_ntarglocals; /* targets of cpus on the local blade */
unsigned long s_ntargremotes; /* targets of cpus on remote blades */
unsigned long s_ntarglocaluvhub; /* targets of the local hub */
unsigned long s_ntargremoteuvhub; /* remotes hubs targeted */
unsigned long s_ntarguvhub; /* total number of uvhubs targeted */
unsigned long s_ntarguvhub16; /* number of times target hubs >= 16*/
unsigned long s_ntarguvhub8; /* number of times target hubs >= 8 */
unsigned long s_ntarguvhub4; /* number of times target hubs >= 4 */
unsigned long s_ntarguvhub2; /* number of times target hubs >= 2 */
unsigned long s_ntarguvhub1; /* number of times target hubs == 1 */
unsigned long s_resets_plug; /* ipi-style resets from plug state */
unsigned long s_resets_timeout; /* ipi-style resets from timeouts */
unsigned long s_busy; /* status stayed busy past s/w timer */
unsigned long s_throttles; /* waits in throttle */
unsigned long s_retry_messages; /* retry broadcasts */
unsigned long s_bau_reenabled; /* for bau enable/disable */
unsigned long s_bau_disabled; /* for bau enable/disable */
unsigned long s_giveup; /* number of fall backs to
IPI-style flushes */
unsigned long s_requestor; /* number of shootdown
requests */
unsigned long s_stimeout; /* source side timeouts */
unsigned long s_dtimeout; /* destination side timeouts */
unsigned long s_time; /* time spent in sending side */
unsigned long s_retriesok; /* successful retries */
unsigned long s_ntargcpu; /* total number of cpu's
targeted */
unsigned long s_ntargself; /* times the sending cpu was
targeted */
unsigned long s_ntarglocals; /* targets of cpus on the local
blade */
unsigned long s_ntargremotes; /* targets of cpus on remote
blades */
unsigned long s_ntarglocaluvhub; /* targets of the local hub */
unsigned long s_ntargremoteuvhub; /* remotes hubs targeted */
unsigned long s_ntarguvhub; /* total number of uvhubs
targeted */
unsigned long s_ntarguvhub16; /* number of times target
hubs >= 16*/
unsigned long s_ntarguvhub8; /* number of times target
hubs >= 8 */
unsigned long s_ntarguvhub4; /* number of times target
hubs >= 4 */
unsigned long s_ntarguvhub2; /* number of times target
hubs >= 2 */
unsigned long s_ntarguvhub1; /* number of times target
hubs == 1 */
unsigned long s_resets_plug; /* ipi-style resets from plug
state */
unsigned long s_resets_timeout; /* ipi-style resets from
timeouts */
unsigned long s_busy; /* status stayed busy past
s/w timer */
unsigned long s_throttles; /* waits in throttle */
unsigned long s_retry_messages; /* retry broadcasts */
unsigned long s_bau_reenabled; /* for bau enable/disable */
unsigned long s_bau_disabled; /* for bau enable/disable */
/* destination statistics */
unsigned long d_alltlb; /* times all tlb's on this cpu were flushed */
unsigned long d_onetlb; /* times just one tlb on this cpu was flushed */
unsigned long d_multmsg; /* interrupts with multiple messages */
unsigned long d_nomsg; /* interrupts with no message */
unsigned long d_time; /* time spent on destination side */
unsigned long d_requestee; /* number of messages processed */
unsigned long d_retries; /* number of retry messages processed */
unsigned long d_canceled; /* number of messages canceled by retries */
unsigned long d_nocanceled; /* retries that found nothing to cancel */
unsigned long d_resets; /* number of ipi-style requests processed */
unsigned long d_rcanceled; /* number of messages canceled by resets */
unsigned long d_alltlb; /* times all tlb's on this
cpu were flushed */
unsigned long d_onetlb; /* times just one tlb on this
cpu was flushed */
unsigned long d_multmsg; /* interrupts with multiple
messages */
unsigned long d_nomsg; /* interrupts with no message */
unsigned long d_time; /* time spent on destination
side */
unsigned long d_requestee; /* number of messages
processed */
unsigned long d_retries; /* number of retry messages
processed */
unsigned long d_canceled; /* number of messages canceled
by retries */
unsigned long d_nocanceled; /* retries that found nothing
to cancel */
unsigned long d_resets; /* number of ipi-style requests
processed */
unsigned long d_rcanceled; /* number of messages canceled
by resets */
};
struct tunables {
int *tunp;
int deflt;
};
struct hub_and_pnode {
short uvhub;
short pnode;
short uvhub;
short pnode;
};
struct socket_desc {
short num_cpus;
short cpu_number[MAX_CPUS_PER_SOCKET];
};
struct uvhub_desc {
unsigned short socket_mask;
short num_cpus;
short uvhub;
short pnode;
struct socket_desc socket[2];
};
/*
* one per-cpu; to locate the software tables
*/
struct bau_control {
struct bau_desc *descriptor_base;
struct bau_payload_queue_entry *va_queue_first;
struct bau_payload_queue_entry *va_queue_last;
struct bau_payload_queue_entry *bau_msg_head;
struct bau_control *uvhub_master;
struct bau_control *socket_master;
struct ptc_stats *statp;
unsigned long timeout_interval;
unsigned long set_bau_on_time;
atomic_t active_descriptor_count;
int plugged_tries;
int timeout_tries;
int ipi_attempts;
int conseccompletes;
int baudisabled;
int set_bau_off;
short cpu;
short osnode;
short uvhub_cpu;
short uvhub;
short cpus_in_socket;
short cpus_in_uvhub;
short partition_base_pnode;
unsigned short message_number;
unsigned short uvhub_quiesce;
short socket_acknowledge_count[DEST_Q_SIZE];
cycles_t send_message;
spinlock_t uvhub_lock;
spinlock_t queue_lock;
struct bau_desc *descriptor_base;
struct bau_pq_entry *queue_first;
struct bau_pq_entry *queue_last;
struct bau_pq_entry *bau_msg_head;
struct bau_control *uvhub_master;
struct bau_control *socket_master;
struct ptc_stats *statp;
unsigned long timeout_interval;
unsigned long set_bau_on_time;
atomic_t active_descriptor_count;
int plugged_tries;
int timeout_tries;
int ipi_attempts;
int conseccompletes;
int baudisabled;
int set_bau_off;
short cpu;
short osnode;
short uvhub_cpu;
short uvhub;
short cpus_in_socket;
short cpus_in_uvhub;
short partition_base_pnode;
unsigned short message_number;
unsigned short uvhub_quiesce;
short socket_acknowledge_count[DEST_Q_SIZE];
cycles_t send_message;
spinlock_t uvhub_lock;
spinlock_t queue_lock;
/* tunables */
int max_bau_concurrent;
int max_bau_concurrent_constant;
int plugged_delay;
int plugsb4reset;
int timeoutsb4reset;
int ipi_reset_limit;
int complete_threshold;
int congested_response_us;
int congested_reps;
int congested_period;
cycles_t period_time;
long period_requests;
struct hub_and_pnode *target_hub_and_pnode;
int max_concurr;
int max_concurr_const;
int plugged_delay;
int plugsb4reset;
int timeoutsb4reset;
int ipi_reset_limit;
int complete_threshold;
int cong_response_us;
int cong_reps;
int cong_period;
cycles_t period_time;
long period_requests;
struct hub_and_pnode *thp;
};
static inline int bau_uvhub_isset(int uvhub, struct bau_target_uvhubmask *dstp)
static unsigned long read_mmr_uv2_status(void)
{
return read_lmmr(UV2H_LB_BAU_SB_ACTIVATION_STATUS_2);
}
static void write_mmr_data_broadcast(int pnode, unsigned long mmr_image)
{
write_gmmr(pnode, UVH_BAU_DATA_BROADCAST, mmr_image);
}
static void write_mmr_descriptor_base(int pnode, unsigned long mmr_image)
{
write_gmmr(pnode, UVH_LB_BAU_SB_DESCRIPTOR_BASE, mmr_image);
}
static void write_mmr_activation(unsigned long index)
{
write_lmmr(UVH_LB_BAU_SB_ACTIVATION_CONTROL, index);
}
static void write_gmmr_activation(int pnode, unsigned long mmr_image)
{
write_gmmr(pnode, UVH_LB_BAU_SB_ACTIVATION_CONTROL, mmr_image);
}
static void write_mmr_payload_first(int pnode, unsigned long mmr_image)
{
write_gmmr(pnode, UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST, mmr_image);
}
static void write_mmr_payload_tail(int pnode, unsigned long mmr_image)
{
write_gmmr(pnode, UVH_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL, mmr_image);
}
static void write_mmr_payload_last(int pnode, unsigned long mmr_image)
{
write_gmmr(pnode, UVH_LB_BAU_INTD_PAYLOAD_QUEUE_LAST, mmr_image);
}
static void write_mmr_misc_control(int pnode, unsigned long mmr_image)
{
write_gmmr(pnode, UVH_LB_BAU_MISC_CONTROL, mmr_image);
}
static unsigned long read_mmr_misc_control(int pnode)
{
return read_gmmr(pnode, UVH_LB_BAU_MISC_CONTROL);
}
static void write_mmr_sw_ack(unsigned long mr)
{
uv_write_local_mmr(UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS, mr);
}
static unsigned long read_mmr_sw_ack(void)
{
return read_lmmr(UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE);
}
static unsigned long read_gmmr_sw_ack(int pnode)
{
return read_gmmr(pnode, UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE);
}
static void write_mmr_data_config(int pnode, unsigned long mr)
{
uv_write_global_mmr64(pnode, UVH_BAU_DATA_CONFIG, mr);
}
static inline int bau_uvhub_isset(int uvhub, struct bau_targ_hubmask *dstp)
{
return constant_test_bit(uvhub, &dstp->bits[0]);
}
static inline void bau_uvhub_set(int pnode, struct bau_target_uvhubmask *dstp)
static inline void bau_uvhub_set(int pnode, struct bau_targ_hubmask *dstp)
{
__set_bit(pnode, &dstp->bits[0]);
}
static inline void bau_uvhubs_clear(struct bau_target_uvhubmask *dstp,
static inline void bau_uvhubs_clear(struct bau_targ_hubmask *dstp,
int nbits)
{
bitmap_zero(&dstp->bits[0], nbits);
}
static inline int bau_uvhub_weight(struct bau_target_uvhubmask *dstp)
static inline int bau_uvhub_weight(struct bau_targ_hubmask *dstp)
{
return bitmap_weight((unsigned long *)&dstp->bits[0],
UV_DISTRIBUTION_SIZE);
@@ -457,9 +620,6 @@ static inline void bau_cpubits_clear(struct bau_local_cpumask *dstp, int nbits)
bitmap_zero(&dstp->bits, nbits);
}
#define cpubit_isset(cpu, bau_local_cpumask) \
test_bit((cpu), (bau_local_cpumask).bits)
extern void uv_bau_message_intr1(void);
extern void uv_bau_timeout_intr1(void);
@@ -467,7 +627,7 @@ struct atomic_short {
short counter;
};
/**
/*
* atomic_read_short - read a short atomic variable
* @v: pointer of type atomic_short
*
@@ -478,14 +638,14 @@ static inline int atomic_read_short(const struct atomic_short *v)
return v->counter;
}
/**
* atomic_add_short_return - add and return a short int
/*
* atom_asr - add and return a short int
* @i: short value to add
* @v: pointer of type atomic_short
*
* Atomically adds @i to @v and returns @i + @v
*/
static inline int atomic_add_short_return(short i, struct atomic_short *v)
static inline int atom_asr(short i, struct atomic_short *v)
{
short __i = i;
asm volatile(LOCK_PREFIX "xaddw %0, %1"
@@ -494,4 +654,26 @@ static inline int atomic_add_short_return(short i, struct atomic_short *v)
return i + __i;
}
/*
* conditionally add 1 to *v, unless *v is >= u
* return 0 if we cannot add 1 to *v because it is >= u
* return 1 if we can add 1 to *v because it is < u
* the add is atomic
*
* This is close to atomic_add_unless(), but this allows the 'u' value
* to be lowered below the current 'v'. atomic_add_unless can only stop
* on equal.
*/
static inline int atomic_inc_unless_ge(spinlock_t *lock, atomic_t *v, int u)
{
spin_lock(lock);
if (atomic_read(v) >= u) {
spin_unlock(lock);
return 0;
}
atomic_inc(v);
spin_unlock(lock);
return 1;
}
#endif /* _ASM_X86_UV_UV_BAU_H */
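
atomic_inc_unless_ge() is the building block for the BAU send throttle:
the active-descriptor count may only grow while it is below a tunable
maximum that can be lowered at any time. A hedged sketch of a sender
using it, built from the bau_control fields above but with an
illustrative function name:

static int demo_try_broadcast(struct bau_control *bcp)
{
	struct bau_control *hmaster = bcp->uvhub_master;

	if (!atomic_inc_unless_ge(&hmaster->uvhub_lock,
				  &hmaster->active_descriptor_count,
				  bcp->max_concurr))
		return -EBUSY;	/* too many descriptors in flight */
	/* ... send, then atomic_dec() when the descriptor completes ... */
	return 0;
}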

View file

@@ -77,8 +77,9 @@
*
* 1111110000000000
* 5432109876543210
* pppppppppplc0cch Nehalem-EX
* ppppppppplcc0cch Westmere-EX
* pppppppppplc0cch Nehalem-EX (12 bits in hdw reg)
* ppppppppplcc0cch Westmere-EX (12 bits in hdw reg)
* pppppppppppcccch SandyBridge (15 bits in hdw reg)
* sssssssssss
*
* p = pnode bits
@@ -87,7 +88,7 @@
* h = hyperthread
* s = bits that are in the SOCKET_ID CSR
*
* Note: Processor only supports 12 bits in the APICID register. The ACPI
* Note: Processor may support fewer bits in the APICID register. The ACPI
* tables hold all 16 bits. Software needs to be aware of this.
*
* Unless otherwise specified, all references to APICID refer to
@@ -138,6 +139,8 @@ struct uv_hub_info_s {
unsigned long global_mmr_base;
unsigned long gpa_mask;
unsigned int gnode_extra;
unsigned char hub_revision;
unsigned char apic_pnode_shift;
unsigned long gnode_upper;
unsigned long lowmem_remap_top;
unsigned long lowmem_remap_base;
@@ -149,13 +152,31 @@
unsigned char m_val;
unsigned char n_val;
struct uv_scir_s scir;
unsigned char apic_pnode_shift;
};
DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
#define uv_hub_info (&__get_cpu_var(__uv_hub_info))
#define uv_cpu_hub_info(cpu) (&per_cpu(__uv_hub_info, cpu))
/*
* Hub revisions less than UV2_HUB_REVISION_BASE are UV1 hubs. All UV2
* hubs have revision numbers greater than or equal to UV2_HUB_REVISION_BASE.
* This is a software convention - NOT the hardware revision numbers in
* the hub chip.
*/
#define UV1_HUB_REVISION_BASE 1
#define UV2_HUB_REVISION_BASE 3
static inline int is_uv1_hub(void)
{
return uv_hub_info->hub_revision < UV2_HUB_REVISION_BASE;
}
static inline int is_uv2_hub(void)
{
return uv_hub_info->hub_revision >= UV2_HUB_REVISION_BASE;
}
union uvh_apicid {
unsigned long v;
struct uvh_apicid_s {
@@ -180,11 +201,25 @@ union uvh_apicid {
#define UV_PNODE_TO_GNODE(p) ((p) |uv_hub_info->gnode_extra)
#define UV_PNODE_TO_NASID(p) (UV_PNODE_TO_GNODE(p) << 1)
#define UV_LOCAL_MMR_BASE 0xf4000000UL
#define UV_GLOBAL_MMR32_BASE 0xf8000000UL
#define UV1_LOCAL_MMR_BASE 0xf4000000UL
#define UV1_GLOBAL_MMR32_BASE 0xf8000000UL
#define UV1_LOCAL_MMR_SIZE (64UL * 1024 * 1024)
#define UV1_GLOBAL_MMR32_SIZE (64UL * 1024 * 1024)
#define UV2_LOCAL_MMR_BASE 0xfa000000UL
#define UV2_GLOBAL_MMR32_BASE 0xfc000000UL
#define UV2_LOCAL_MMR_SIZE (32UL * 1024 * 1024)
#define UV2_GLOBAL_MMR32_SIZE (32UL * 1024 * 1024)
#define UV_LOCAL_MMR_BASE (is_uv1_hub() ? UV1_LOCAL_MMR_BASE \
: UV2_LOCAL_MMR_BASE)
#define UV_GLOBAL_MMR32_BASE (is_uv1_hub() ? UV1_GLOBAL_MMR32_BASE \
: UV2_GLOBAL_MMR32_BASE)
#define UV_LOCAL_MMR_SIZE (is_uv1_hub() ? UV1_LOCAL_MMR_SIZE : \
UV2_LOCAL_MMR_SIZE)
#define UV_GLOBAL_MMR32_SIZE (is_uv1_hub() ? UV1_GLOBAL_MMR32_SIZE :\
UV2_GLOBAL_MMR32_SIZE)
#define UV_GLOBAL_MMR64_BASE (uv_hub_info->global_mmr_base)
#define UV_LOCAL_MMR_SIZE (64UL * 1024 * 1024)
#define UV_GLOBAL_MMR32_SIZE (64UL * 1024 * 1024)
#define UV_GLOBAL_GRU_MMR_BASE 0x4000000
@@ -300,6 +335,17 @@ static inline int uv_apicid_to_pnode(int apicid)
return (apicid >> uv_hub_info->apic_pnode_shift);
}
/*
* Convert an apicid to the socket number on the blade
*/
static inline int uv_apicid_to_socket(int apicid)
{
if (is_uv1_hub())
return (apicid >> (uv_hub_info->apic_pnode_shift - 1)) & 1;
else
return 0;
}
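A worked example of the two conversions above as a stand-alone program. The
shift value 6 is assumed purely for illustration; the kernel reads the real
per-cpu value from the UVH_APICID MMR (uv_system_init() below stores
uvh_apicid.s.pnode_shift for each cpu).

#include <stdio.h>

int main(void)
{
        int apic_pnode_shift = 6;       /* assumed for illustration      */
        int apicid = 0xb5;              /* arbitrary sample: 0b10110101  */
        int uv1 = 1;                    /* pretend is_uv1_hub() is true  */

        int pnode  = apicid >> apic_pnode_shift;
        int socket = uv1 ? (apicid >> (apic_pnode_shift - 1)) & 1 : 0;

        printf("pnode %d socket %d\n", pnode, socket);  /* pnode 2 socket 1 */
        return 0;
}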
/*
* Access global MMRs using the low memory MMR32 space. This region supports
* faster MMR access but not all MMRs are accessible in this space.
@ -519,14 +565,13 @@ static inline void uv_hub_send_ipi(int pnode, int apicid, int vector)
/*
* Get the minimum revision number of the hub chips within the partition.
* 1 - initial rev 1.0 silicon
* 2 - rev 2.0 production silicon
* 1 - UV1 rev 1.0 initial silicon
* 2 - UV1 rev 2.0 production silicon
* 3 - UV2 rev 1.0 initial silicon
*/
static inline int uv_get_min_hub_revision_id(void)
{
extern int uv_min_hub_revision_id;
return uv_min_hub_revision_id;
return uv_hub_info->hub_revision;
}
#endif /* CONFIG_X86_64 */
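The revision convention condenses into a small stand-alone demo. The two
constants and the comparison below copy the definitions above; the rest is
scaffolding invented for illustration.

#include <stdio.h>

#define UV1_HUB_REVISION_BASE   1
#define UV2_HUB_REVISION_BASE   3

static const char *hub_name(unsigned char hub_revision)
{
        return hub_revision < UV2_HUB_REVISION_BASE ? "UV1" : "UV2";
}

int main(void)
{
        /* 1, 2 = UV1 silicon revs; 3 = first UV2 rev (see table above) */
        for (unsigned char rev = 1; rev <= 3; rev++)
                printf("revision %u -> %s\n", rev, hub_name(rev));
        return 0;
}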

File diff suppressed because it is too large

View file

@ -8,6 +8,7 @@ CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE)
ifdef CONFIG_FUNCTION_TRACER
# Do not profile debug and lowlevel utilities
CFLAGS_REMOVE_tsc.o = -pg
CFLAGS_REMOVE_rtc.o = -pg
CFLAGS_REMOVE_paravirt-spinlocks.o = -pg
CFLAGS_REMOVE_pvclock.o = -pg
@ -28,6 +29,7 @@ CFLAGS_paravirt.o := $(nostackp)
GCOV_PROFILE_vsyscall_64.o := n
GCOV_PROFILE_hpet.o := n
GCOV_PROFILE_tsc.o := n
GCOV_PROFILE_vread_tsc_64.o := n
GCOV_PROFILE_paravirt.o := n
# vread_tsc_64 is hot and should be fully optimized:

View file

@ -91,6 +91,10 @@ static int __init early_get_pnodeid(void)
m_n_config.v = uv_early_read_mmr(UVH_RH_GAM_CONFIG_MMR);
uv_min_hub_revision_id = node_id.s.revision;
if (node_id.s.part_number == UV2_HUB_PART_NUMBER)
uv_min_hub_revision_id += UV2_HUB_REVISION_BASE - 1;
uv_hub_info->hub_revision = uv_min_hub_revision_id;
pnode = (node_id.s.node_id >> 1) & ((1 << m_n_config.s.n_skt) - 1);
return pnode;
}
@ -112,17 +116,25 @@ static void __init early_get_apic_pnode_shift(void)
*/
static void __init uv_set_apicid_hibit(void)
{
union uvh_lb_target_physical_apic_id_mask_u apicid_mask;
union uv1h_lb_target_physical_apic_id_mask_u apicid_mask;
apicid_mask.v = uv_early_read_mmr(UVH_LB_TARGET_PHYSICAL_APIC_ID_MASK);
uv_apicid_hibits = apicid_mask.s.bit_enables & UV_APICID_HIBIT_MASK;
if (is_uv1_hub()) {
apicid_mask.v =
uv_early_read_mmr(UV1H_LB_TARGET_PHYSICAL_APIC_ID_MASK);
uv_apicid_hibits =
apicid_mask.s1.bit_enables & UV_APICID_HIBIT_MASK;
}
}
static int __init uv_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
{
int pnodeid;
int pnodeid, is_uv1, is_uv2;
if (!strcmp(oem_id, "SGI")) {
is_uv1 = !strcmp(oem_id, "SGI");
is_uv2 = !strcmp(oem_id, "SGI2");
if (is_uv1 || is_uv2) {
uv_hub_info->hub_revision =
is_uv1 ? UV1_HUB_REVISION_BASE : UV2_HUB_REVISION_BASE;
pnodeid = early_get_pnodeid();
early_get_apic_pnode_shift();
x86_platform.is_untracked_pat_range = uv_is_untracked_pat_range;
@ -484,12 +496,19 @@ static __init void map_mmr_high(int max_pnode)
static __init void map_mmioh_high(int max_pnode)
{
union uvh_rh_gam_mmioh_overlay_config_mmr_u mmioh;
int shift = UVH_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_BASE_SHFT;
int shift;
mmioh.v = uv_read_local_mmr(UVH_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR);
if (mmioh.s.enable)
map_high("MMIOH", mmioh.s.base, shift, mmioh.s.m_io,
if (is_uv1_hub() && mmioh.s1.enable) {
shift = UV1H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_BASE_SHFT;
map_high("MMIOH", mmioh.s1.base, shift, mmioh.s1.m_io,
max_pnode, map_uc);
}
if (is_uv2_hub() && mmioh.s2.enable) {
shift = UV2H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_BASE_SHFT;
map_high("MMIOH", mmioh.s2.base, shift, mmioh.s2.m_io,
max_pnode, map_uc);
}
}
static __init void map_low_mmrs(void)
@ -736,13 +755,14 @@ void __init uv_system_init(void)
unsigned long mmr_base, present, paddr;
unsigned short pnode_mask, pnode_io_mask;
printk(KERN_INFO "UV: Found %s hub\n", is_uv1_hub() ? "UV1" : "UV2");
map_low_mmrs();
m_n_config.v = uv_read_local_mmr(UVH_RH_GAM_CONFIG_MMR );
m_val = m_n_config.s.m_skt;
n_val = m_n_config.s.n_skt;
mmioh.v = uv_read_local_mmr(UVH_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR);
n_io = mmioh.s.n_io;
n_io = is_uv1_hub() ? mmioh.s1.n_io : mmioh.s2.n_io;
mmr_base =
uv_read_local_mmr(UVH_RH_GAM_MMR_OVERLAY_CONFIG_MMR) &
~UV_MMR_ENABLE;
@ -811,6 +831,8 @@ void __init uv_system_init(void)
*/
uv_cpu_hub_info(cpu)->pnode_mask = pnode_mask;
uv_cpu_hub_info(cpu)->apic_pnode_shift = uvh_apicid.s.pnode_shift;
uv_cpu_hub_info(cpu)->hub_revision = uv_hub_info->hub_revision;
pnode = uv_apicid_to_pnode(apicid);
blade = boot_pnode_to_blade(pnode);
lcpu = uv_blade_info[blade].nr_possible_cpus;

View file

@ -361,6 +361,7 @@ struct apm_user {
* idle percentage above which bios idle calls are done
*/
#ifdef CONFIG_APM_CPU_IDLE
#warning deprecated CONFIG_APM_CPU_IDLE will be deleted in 2012
#define DEFAULT_IDLE_THRESHOLD 95
#else
#define DEFAULT_IDLE_THRESHOLD 100
@ -904,6 +905,7 @@ static void apm_cpu_idle(void)
unsigned int jiffies_since_last_check = jiffies - last_jiffies;
unsigned int bucket;
WARN_ONCE(1, "deprecated apm_cpu_idle will be deleted in 2012");
recalc:
if (jiffies_since_last_check > IDLE_CALC_LIMIT) {
use_apm_idle = 0;

View file

@ -612,8 +612,11 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
}
#endif
/* As a rule processors have APIC timer running in deep C states */
if (c->x86 > 0xf && !cpu_has_amd_erratum(amd_erratum_400))
/*
* Family 0x12 and above processors have APIC timer
* running in deep C states.
*/
if (c->x86 > 0x11)
set_cpu_cap(c, X86_FEATURE_ARAT);
/*

View file

@ -19,6 +19,7 @@
static int __init no_halt(char *s)
{
WARN_ONCE(1, "\"no-hlt\" is deprecated, please use \"idle=poll\"\n");
boot_cpu_data.hlt_works_ok = 0;
return 1;
}

View file

@ -477,13 +477,6 @@ void __cpuinit detect_ht(struct cpuinfo_x86 *c)
if (smp_num_siblings <= 1)
goto out;
if (smp_num_siblings > nr_cpu_ids) {
pr_warning("CPU: Unsupported number of siblings %d",
smp_num_siblings);
smp_num_siblings = 1;
return;
}
index_msb = get_count_order(smp_num_siblings);
c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid, index_msb);
@ -909,7 +902,7 @@ static void vgetcpu_set_mode(void)
void __init identify_boot_cpu(void)
{
identify_cpu(&boot_cpu_data);
init_c1e_mask();
init_amd_e400_c1e_mask();
#ifdef CONFIG_X86_32
sysenter_setup();
enable_sep_cpu();

View file

@ -123,7 +123,7 @@ static unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
static atomic_t nmi_running = ATOMIC_INIT(0);
static int mod_code_status; /* holds return value of text write */
static void *mod_code_ip; /* holds the IP to write to */
static void *mod_code_newcode; /* holds the text to write to the IP */
static const void *mod_code_newcode; /* holds the text to write to the IP */
static unsigned nmi_wait_count;
static atomic_t nmi_update_count = ATOMIC_INIT(0);
@ -225,7 +225,7 @@ within(unsigned long addr, unsigned long start, unsigned long end)
}
static int
do_ftrace_mod_code(unsigned long ip, void *new_code)
do_ftrace_mod_code(unsigned long ip, const void *new_code)
{
/*
* On x86_64, kernel text mappings are mapped read-only with
@ -266,8 +266,8 @@ static const unsigned char *ftrace_nop_replace(void)
}
static int
ftrace_modify_code(unsigned long ip, unsigned char *old_code,
unsigned char *new_code)
ftrace_modify_code(unsigned long ip, unsigned const char *old_code,
unsigned const char *new_code)
{
unsigned char replaced[MCOUNT_INSN_SIZE];
@ -301,7 +301,7 @@ ftrace_modify_code(unsigned long ip, unsigned char *old_code,
int ftrace_make_nop(struct module *mod,
struct dyn_ftrace *rec, unsigned long addr)
{
unsigned char *new, *old;
unsigned const char *new, *old;
unsigned long ip = rec->ip;
old = ftrace_call_replace(ip, addr);
@ -312,7 +312,7 @@ int ftrace_make_nop(struct module *mod,
int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
unsigned char *new, *old;
unsigned const char *new, *old;
unsigned long ip = rec->ip;
old = ftrace_nop_replace();

View file

@ -337,7 +337,9 @@ EXPORT_SYMBOL(boot_option_idle_override);
* Powermanagement idle function, if any..
*/
void (*pm_idle)(void);
#if defined(CONFIG_APM_MODULE) && defined(CONFIG_APM_CPU_IDLE)
EXPORT_SYMBOL(pm_idle);
#endif
#ifdef CONFIG_X86_32
/*
@ -397,7 +399,7 @@ void default_idle(void)
cpu_relax();
}
}
#ifdef CONFIG_APM_MODULE
#if defined(CONFIG_APM_MODULE) && defined(CONFIG_APM_CPU_IDLE)
EXPORT_SYMBOL(default_idle);
#endif
@ -535,45 +537,45 @@ int mwait_usable(const struct cpuinfo_x86 *c)
return (edx & MWAIT_EDX_C1);
}
bool c1e_detected;
EXPORT_SYMBOL(c1e_detected);
bool amd_e400_c1e_detected;
EXPORT_SYMBOL(amd_e400_c1e_detected);
static cpumask_var_t c1e_mask;
static cpumask_var_t amd_e400_c1e_mask;
void c1e_remove_cpu(int cpu)
void amd_e400_remove_cpu(int cpu)
{
if (c1e_mask != NULL)
cpumask_clear_cpu(cpu, c1e_mask);
if (amd_e400_c1e_mask != NULL)
cpumask_clear_cpu(cpu, amd_e400_c1e_mask);
}
/*
* C1E aware idle routine. We check for C1E active in the interrupt
* AMD Erratum 400 aware idle routine. We check for C1E active in the interrupt
* pending message MSR. If we detect C1E, then we handle it the same
* way as C3 power states (local apic timer and TSC stop)
*/
static void c1e_idle(void)
static void amd_e400_idle(void)
{
if (need_resched())
return;
if (!c1e_detected) {
if (!amd_e400_c1e_detected) {
u32 lo, hi;
rdmsr(MSR_K8_INT_PENDING_MSG, lo, hi);
if (lo & K8_INTP_C1E_ACTIVE_MASK) {
c1e_detected = true;
amd_e400_c1e_detected = true;
if (!boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
mark_tsc_unstable("TSC halt in AMD C1E");
printk(KERN_INFO "System has AMD C1E enabled\n");
}
}
if (c1e_detected) {
if (amd_e400_c1e_detected) {
int cpu = smp_processor_id();
if (!cpumask_test_cpu(cpu, c1e_mask)) {
cpumask_set_cpu(cpu, c1e_mask);
if (!cpumask_test_cpu(cpu, amd_e400_c1e_mask)) {
cpumask_set_cpu(cpu, amd_e400_c1e_mask);
/*
* Force broadcast so ACPI can not interfere.
*/
@ -616,17 +618,17 @@ void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c)
pm_idle = mwait_idle;
} else if (cpu_has_amd_erratum(amd_erratum_400)) {
/* E400: APIC timer interrupt does not wake up CPU from C1e */
printk(KERN_INFO "using C1E aware idle routine\n");
pm_idle = c1e_idle;
printk(KERN_INFO "using AMD E400 aware idle routine\n");
pm_idle = amd_e400_idle;
} else
pm_idle = default_idle;
}
void __init init_c1e_mask(void)
void __init init_amd_e400_c1e_mask(void)
{
/* If we're using c1e_idle, we need to allocate c1e_mask. */
if (pm_idle == c1e_idle)
zalloc_cpumask_var(&c1e_mask, GFP_KERNEL);
/* If we're using amd_e400_idle, we need to allocate amd_e400_c1e_mask. */
if (pm_idle == amd_e400_idle)
zalloc_cpumask_var(&amd_e400_c1e_mask, GFP_KERNEL);
}
static int __init idle_setup(char *str)
@ -640,6 +642,7 @@ static int __init idle_setup(char *str)
boot_option_idle_override = IDLE_POLL;
} else if (!strcmp(str, "mwait")) {
boot_option_idle_override = IDLE_FORCE_MWAIT;
WARN_ONCE(1, "\"idle=mwait\" will be removed in 2012\n");
} else if (!strcmp(str, "halt")) {
/*
* When the boot option of idle=halt is added, halt is

View file

@ -910,6 +910,13 @@ void __init setup_arch(char **cmdline_p)
memblock.current_limit = get_max_mapped();
memblock_x86_fill();
/*
* The EFI specification says that boot service code won't be called
* after ExitBootServices(). This is, in fact, a lie.
*/
if (efi_enabled)
efi_reserve_boot_services();
/* preallocate 4k for mptable mpc */
early_reserve_e820_mpc_new();

View file

@ -1307,7 +1307,7 @@ void play_dead_common(void)
{
idle_task_exit();
reset_lazy_tlbstate();
c1e_remove_cpu(raw_smp_processor_id());
amd_e400_remove_cpu(raw_smp_processor_id());
mb();
/* Ack it */
@ -1332,7 +1332,7 @@ static inline void mwait_play_dead(void)
void *mwait_ptr;
struct cpuinfo_x86 *c = __this_cpu_ptr(&cpu_info);
if (!this_cpu_has(X86_FEATURE_MWAIT) && mwait_usable(c))
if (!(this_cpu_has(X86_FEATURE_MWAIT) && mwait_usable(c)))
return;
if (!this_cpu_has(X86_FEATURE_CLFLSH))
return;
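The mwait_play_dead() change above is a one-character logic fix: the old
condition negated only this_cpu_has(), so the function bailed out precisely
when MWAIT was absent yet reported usable, and proceeded when MWAIT existed
but was unusable. A stand-alone truth-table sketch (names invented):

#include <stdio.h>

int main(void)
{
        for (int has = 0; has <= 1; has++)
                for (int usable = 0; usable <= 1; usable++)
                        printf("has=%d usable=%d old_bail=%d new_bail=%d\n",
                               has, usable,
                               !has && usable,     /* old, buggy test    */
                               !(has && usable));  /* fixed: bail unless
                                                      both are true      */
        return 0;
}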

View file

@ -993,6 +993,7 @@ static void lguest_time_irq(unsigned int irq, struct irq_desc *desc)
static void lguest_time_init(void)
{
/* Set up the timer interrupt (0) to go to our simple timer routine */
lguest_setup_irq(0);
irq_set_handler(0, lguest_time_irq);
clocksource_register_hz(&lguest_clock, NSEC_PER_SEC);

View file

@ -823,16 +823,30 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
force_sig_info_fault(SIGBUS, code, address, tsk, fault);
}
static noinline void
static noinline int
mm_fault_error(struct pt_regs *regs, unsigned long error_code,
unsigned long address, unsigned int fault)
{
/*
 * The page fault was interrupted by SIGKILL. We have no reason to
 * continue handling it.
*/
if (fatal_signal_pending(current)) {
if (!(fault & VM_FAULT_RETRY))
up_read(&current->mm->mmap_sem);
if (!(error_code & PF_USER))
no_context(regs, error_code, address);
return 1;
}
if (!(fault & VM_FAULT_ERROR))
return 0;
if (fault & VM_FAULT_OOM) {
/* Kernel mode? Handle exceptions or die: */
if (!(error_code & PF_USER)) {
up_read(&current->mm->mmap_sem);
no_context(regs, error_code, address);
return;
return 1;
}
out_of_memory(regs, error_code, address);
@ -843,6 +857,7 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
else
BUG();
}
return 1;
}
static int spurious_fault_check(unsigned long error_code, pte_t *pte)
@ -1133,19 +1148,9 @@ good_area:
*/
fault = handle_mm_fault(mm, vma, address, flags);
if (unlikely(fault & VM_FAULT_ERROR)) {
mm_fault_error(regs, error_code, address, fault);
return;
}
/*
* Pagefault was interrupted by SIGKILL. We have no reason to
* continue pagefault.
*/
if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) {
if (!(error_code & PF_USER))
no_context(regs, error_code, address);
return;
if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
if (mm_fault_error(regs, error_code, address, fault))
return;
}
/*

View file

@ -316,16 +316,23 @@ static void op_amd_stop_ibs(void)
wrmsrl(MSR_AMD64_IBSOPCTL, 0);
}
static inline int eilvt_is_available(int offset)
static inline int get_eilvt(int offset)
{
/* check if we may assign a vector */
return !setup_APIC_eilvt(offset, 0, APIC_EILVT_MSG_NMI, 1);
}
static inline int put_eilvt(int offset)
{
return !setup_APIC_eilvt(offset, 0, 0, 1);
}
static inline int ibs_eilvt_valid(void)
{
int offset;
u64 val;
int valid = 0;
preempt_disable();
rdmsrl(MSR_AMD64_IBSCTL, val);
offset = val & IBSCTL_LVT_OFFSET_MASK;
@ -333,16 +340,20 @@ static inline int ibs_eilvt_valid(void)
if (!(val & IBSCTL_LVT_OFFSET_VALID)) {
pr_err(FW_BUG "cpu %d, invalid IBS interrupt offset %d (MSR%08X=0x%016llx)\n",
smp_processor_id(), offset, MSR_AMD64_IBSCTL, val);
return 0;
goto out;
}
if (!eilvt_is_available(offset)) {
if (!get_eilvt(offset)) {
pr_err(FW_BUG "cpu %d, IBS interrupt offset %d not available (MSR%08X=0x%016llx)\n",
smp_processor_id(), offset, MSR_AMD64_IBSCTL, val);
return 0;
goto out;
}
return 1;
valid = 1;
out:
preempt_enable();
return valid;
}
static inline int get_ibs_offset(void)
@ -600,67 +611,69 @@ static int setup_ibs_ctl(int ibs_eilvt_off)
static int force_ibs_eilvt_setup(void)
{
int i;
int offset;
int ret;
/* find the next free available EILVT entry */
for (i = 1; i < 4; i++) {
if (!eilvt_is_available(i))
continue;
ret = setup_ibs_ctl(i);
if (ret)
return ret;
pr_err(FW_BUG "using offset %d for IBS interrupts\n", i);
return 0;
/*
 * Find the next available EILVT entry; skip offset 0 and pin the
 * search to this cpu.
*/
preempt_disable();
for (offset = 1; offset < APIC_EILVT_NR_MAX; offset++) {
if (get_eilvt(offset))
break;
}
preempt_enable();
if (offset == APIC_EILVT_NR_MAX) {
printk(KERN_DEBUG "No EILVT entry available\n");
return -EBUSY;
}
printk(KERN_DEBUG "No EILVT entry available\n");
return -EBUSY;
}
static int __init_ibs_nmi(void)
{
int ret;
if (ibs_eilvt_valid())
return 0;
ret = force_ibs_eilvt_setup();
ret = setup_ibs_ctl(offset);
if (ret)
return ret;
goto out;
if (!ibs_eilvt_valid())
return -EFAULT;
if (!ibs_eilvt_valid()) {
ret = -EFAULT;
goto out;
}
pr_err(FW_BUG "using offset %d for IBS interrupts\n", offset);
pr_err(FW_BUG "workaround enabled for IBS LVT offset\n");
return 0;
out:
preempt_disable();
put_eilvt(offset);
preempt_enable();
return ret;
}
/*
* check and reserve APIC extended interrupt LVT offset for IBS if
* available
*
 * init_ibs() implicitly performs cpu-local operations, so pin this
 * thread to its current CPU
*/
static void init_ibs(void)
{
preempt_disable();
ibs_caps = get_ibs_caps();
if (!ibs_caps)
return;
if (ibs_eilvt_valid())
goto out;
if (__init_ibs_nmi() < 0)
ibs_caps = 0;
else
printk(KERN_INFO "oprofile: AMD IBS detected (0x%08x)\n", ibs_caps);
if (!force_ibs_eilvt_setup())
goto out;
/* Failed to setup ibs */
ibs_caps = 0;
return;
out:
preempt_enable();
printk(KERN_INFO "oprofile: AMD IBS detected (0x%08x)\n", ibs_caps);
}
static int (*create_arch_files)(struct super_block *sb, struct dentry *root);

View file

@ -304,6 +304,40 @@ static void __init print_efi_memmap(void)
}
#endif /* EFI_DEBUG */
void __init efi_reserve_boot_services(void)
{
void *p;
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
efi_memory_desc_t *md = p;
unsigned long long start = md->phys_addr;
unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
if (md->type != EFI_BOOT_SERVICES_CODE &&
md->type != EFI_BOOT_SERVICES_DATA)
continue;
memblock_x86_reserve_range(start, start + size, "EFI Boot");
}
}
static void __init efi_free_boot_services(void)
{
void *p;
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
efi_memory_desc_t *md = p;
unsigned long long start = md->phys_addr;
unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
if (md->type != EFI_BOOT_SERVICES_CODE &&
md->type != EFI_BOOT_SERVICES_DATA)
continue;
free_bootmem_late(start, size);
}
}
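Both loops above advance by memmap.desc_size rather than by
sizeof(efi_memory_desc_t): the UEFI specification lets firmware return
descriptors larger than the structure the kernel was compiled against. A
stand-alone sketch of the same walking pattern, with a mocked three-entry
map and invented sizes:

#include <stdio.h>
#include <string.h>

struct desc {                           /* stand-in for efi_memory_desc_t */
        unsigned int type;
        unsigned long long phys_addr;
        unsigned long long num_pages;
};

int main(void)
{
        unsigned char map[3 * 32];      /* firmware slot size: 32 bytes */
        size_t desc_size = 32;          /* larger than sizeof(struct desc) */

        memset(map, 0, sizeof(map));
        for (int i = 0; i < 3; i++) {
                struct desc d = { 7u, 0x1000ull * (i + 1), 16ull };
                memcpy(map + (size_t)i * desc_size, &d, sizeof(d));
        }

        /* walk by desc_size, never by sizeof(struct desc) */
        for (unsigned char *p = map; p < map + sizeof(map); p += desc_size) {
                struct desc md;

                memcpy(&md, p, sizeof(md));
                printf("type %u base 0x%llx pages %llu\n",
                       md.type, md.phys_addr, md.num_pages);
        }
        return 0;
}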
void __init efi_init(void)
{
efi_config_table_t *config_tables;
@ -536,7 +570,9 @@ void __init efi_enter_virtual_mode(void)
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
md = p;
if (!(md->attribute & EFI_MEMORY_RUNTIME))
if (!(md->attribute & EFI_MEMORY_RUNTIME) &&
md->type != EFI_BOOT_SERVICES_CODE &&
md->type != EFI_BOOT_SERVICES_DATA)
continue;
size = md->num_pages << EFI_PAGE_SHIFT;
@ -592,6 +628,13 @@ void __init efi_enter_virtual_mode(void)
panic("EFI call to SetVirtualAddressMap() failed!");
}
/*
* Thankfully, it does seem that no runtime services other than
* SetVirtualAddressMap() will touch boot services code, so we can
* get rid of it all at this point
*/
efi_free_boot_services();
/*
* Now that EFI is in virtual mode, update the function
* pointers in the runtime service table to the new virtual addresses.

View file

@ -49,10 +49,11 @@ static void __init early_code_mapping_set_exec(int executable)
if (!(__supported_pte_mask & _PAGE_NX))
return;
/* Make EFI runtime service code area executable */
/* Make EFI service code area executable */
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
md = p;
if (md->type == EFI_RUNTIME_SERVICES_CODE)
if (md->type == EFI_RUNTIME_SERVICES_CODE ||
md->type == EFI_BOOT_SERVICES_CODE)
efi_set_executable(md, executable);
}
}

File diff suppressed because it is too large

View file

@ -99,8 +99,12 @@ static void uv_rtc_send_IPI(int cpu)
/* Check for an RTC interrupt pending */
static int uv_intr_pending(int pnode)
{
return uv_read_global_mmr64(pnode, UVH_EVENT_OCCURRED0) &
UVH_EVENT_OCCURRED0_RTC1_MASK;
if (is_uv1_hub())
return uv_read_global_mmr64(pnode, UVH_EVENT_OCCURRED0) &
UV1H_EVENT_OCCURRED0_RTC1_MASK;
else
return uv_read_global_mmr64(pnode, UV2H_EVENT_OCCURRED2) &
UV2H_EVENT_OCCURRED2_RTC_1_MASK;
}
/* Setup interrupt and return non-zero if early expiration occurred. */
@ -114,8 +118,12 @@ static int uv_setup_intr(int cpu, u64 expires)
UVH_RTC1_INT_CONFIG_M_MASK);
uv_write_global_mmr64(pnode, UVH_INT_CMPB, -1L);
uv_write_global_mmr64(pnode, UVH_EVENT_OCCURRED0_ALIAS,
UVH_EVENT_OCCURRED0_RTC1_MASK);
if (is_uv1_hub())
uv_write_global_mmr64(pnode, UVH_EVENT_OCCURRED0_ALIAS,
UV1H_EVENT_OCCURRED0_RTC1_MASK);
else
uv_write_global_mmr64(pnode, UV2H_EVENT_OCCURRED2_ALIAS,
UV2H_EVENT_OCCURRED2_RTC_1_MASK);
val = (X86_PLATFORM_IPI_VECTOR << UVH_RTC1_INT_CONFIG_VECTOR_SHFT) |
((u64)apicid << UVH_RTC1_INT_CONFIG_APIC_ID_SHFT);

View file

@ -21,7 +21,7 @@ static void cfq_dtor(struct io_context *ioc)
if (!hlist_empty(&ioc->cic_list)) {
struct cfq_io_context *cic;
cic = list_entry(ioc->cic_list.first, struct cfq_io_context,
cic = hlist_entry(ioc->cic_list.first, struct cfq_io_context,
cic_list);
cic->dtor(ioc);
}
@ -57,7 +57,7 @@ static void cfq_exit(struct io_context *ioc)
if (!hlist_empty(&ioc->cic_list)) {
struct cfq_io_context *cic;
cic = list_entry(ioc->cic_list.first, struct cfq_io_context,
cic = hlist_entry(ioc->cic_list.first, struct cfq_io_context,
cic_list);
cic->exit(ioc);
}

View file

@ -185,7 +185,7 @@ struct cfq_group {
int nr_cfqq;
/*
* Per group busy queus average. Useful for workload slice calc. We
* Per group busy queues average. Useful for workload slice calc. We
* create the array for each prio class but at run time it is used
* only for RT and BE class and slot for IDLE class remains unused.
* This is primarily done to avoid confusion and a gcc warning.
@ -369,16 +369,16 @@ CFQ_CFQQ_FNS(wait_busy);
#define cfq_log_cfqq(cfqd, cfqq, fmt, args...) \
blk_add_trace_msg((cfqd)->queue, "cfq%d%c %s " fmt, (cfqq)->pid, \
cfq_cfqq_sync((cfqq)) ? 'S' : 'A', \
blkg_path(&(cfqq)->cfqg->blkg), ##args);
blkg_path(&(cfqq)->cfqg->blkg), ##args)
#define cfq_log_cfqg(cfqd, cfqg, fmt, args...) \
blk_add_trace_msg((cfqd)->queue, "%s " fmt, \
blkg_path(&(cfqg)->blkg), ##args); \
blkg_path(&(cfqg)->blkg), ##args) \
#else
#define cfq_log_cfqq(cfqd, cfqq, fmt, args...) \
blk_add_trace_msg((cfqd)->queue, "cfq%d " fmt, (cfqq)->pid, ##args)
#define cfq_log_cfqg(cfqd, cfqg, fmt, args...) do {} while (0);
#define cfq_log_cfqg(cfqd, cfqg, fmt, args...) do {} while (0)
#endif
#define cfq_log(cfqd, fmt, args...) \
blk_add_trace_msg((cfqd)->queue, "cfq " fmt, ##args)
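The semicolons dropped from the macros above are not cosmetic: a macro body
ending in ';' (or in 'do {} while (0);') expands to two statements, which
breaks any caller that uses the macro as the lone statement of an if branch
followed by else. A minimal stand-alone demonstration with invented macros:

#include <stdio.h>

#define log_bad(msg)    printf("%s\n", msg);    /* trailing ';': broken  */
#define log_ok(msg)     printf("%s\n", msg)     /* no trailing ';': fine */

int main(void)
{
        int busy = 1;

        if (busy)
                log_ok("busy");
        else
                log_ok("idle");
        /*
         * Swap log_ok for log_bad above and the branch expands to
         *      if (busy) printf("%s\n", "busy");;
         * the first ';' closes the if, the second is an empty
         * statement, and the following 'else' has no 'if' left to
         * bind to -- a compile error.
         */
        return 0;
}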
@ -3786,9 +3786,6 @@ new_queue:
return 0;
queue_fail:
if (cic)
put_io_context(cic->ioc);
cfq_schedule_dispatch(cfqd);
spin_unlock_irqrestore(q->queue_lock, flags);
cfq_log(cfqd, "set_request fail");

View file

@ -17,6 +17,9 @@ obj-$(CONFIG_SFI) += sfi/
# was used and do nothing if so
obj-$(CONFIG_PNP) += pnp/
obj-$(CONFIG_ARM_AMBA) += amba/
# Many drivers will want to use DMA so this has to be made available
# really early.
obj-$(CONFIG_DMA_ENGINE) += dma/
obj-$(CONFIG_VIRTIO) += virtio/
obj-$(CONFIG_XEN) += xen/
@ -92,7 +95,6 @@ obj-$(CONFIG_EISA) += eisa/
obj-y += lguest/
obj-$(CONFIG_CPU_FREQ) += cpufreq/
obj-$(CONFIG_CPU_IDLE) += cpuidle/
obj-$(CONFIG_DMA_ENGINE) += dma/
obj-$(CONFIG_MMC) += mmc/
obj-$(CONFIG_MEMSTICK) += memstick/
obj-y += leds/

View file

@ -369,6 +369,21 @@ config ACPI_HED
which is used to report some hardware errors notified via
SCI, mainly the corrected errors.
config ACPI_CUSTOM_METHOD
tristate "Allow ACPI methods to be inserted/replaced at run time"
depends on DEBUG_FS
default n
help
This debug facility allows ACPI AML methods to be inserted and/or
replaced without rebooting the system. For details refer to:
Documentation/acpi/method-customizing.txt.
NOTE: This option is security sensitive, because it allows arbitrary
kernel memory to be written to by root (uid=0) users, allowing them
to bypass certain security measures (e.g. if root is not allowed to
load additional kernel modules after boot, this feature may be used
to override that restriction).
source "drivers/acpi/apei/Kconfig"
endif # ACPI

View file

@ -61,6 +61,7 @@ obj-$(CONFIG_ACPI_SBS) += sbshc.o
obj-$(CONFIG_ACPI_SBS) += sbs.o
obj-$(CONFIG_ACPI_HED) += hed.o
obj-$(CONFIG_ACPI_EC_DEBUGFS) += ec_sys.o
obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o
# processor has its own "processor." module_param namespace
processor-y := processor_driver.o processor_throttling.o

View file

@ -14,7 +14,7 @@ acpi-y := dsfield.o dsmthdat.o dsopcode.o dswexec.o dswscope.o \
acpi-y += evevent.o evregion.o evsci.o evxfevnt.o \
evmisc.o evrgnini.o evxface.o evxfregn.o \
evgpe.o evgpeblk.o evgpeinit.o evgpeutil.o evxfgpe.o
evgpe.o evgpeblk.o evgpeinit.o evgpeutil.o evxfgpe.o evglock.o
acpi-y += exconfig.o exfield.o exnames.o exoparg6.o exresolv.o exstorob.o\
exconvrt.o exfldio.o exoparg1.o exprep.o exresop.o exsystem.o\

View file

@ -187,7 +187,6 @@
/* Operation regions */
#define ACPI_NUM_PREDEFINED_REGIONS 9
#define ACPI_USER_REGION_BEGIN 0x80
/* Maximum space_ids for Operation Regions */

View file

@ -58,18 +58,23 @@ u32 acpi_ev_fixed_event_detect(void);
*/
u8 acpi_ev_is_notify_object(struct acpi_namespace_node *node);
acpi_status acpi_ev_acquire_global_lock(u16 timeout);
acpi_status acpi_ev_release_global_lock(void);
acpi_status acpi_ev_init_global_lock_handler(void);
u32 acpi_ev_get_gpe_number_index(u32 gpe_number);
acpi_status
acpi_ev_queue_notify_request(struct acpi_namespace_node *node,
u32 notify_value);
/*
* evglock - Global Lock support
*/
acpi_status acpi_ev_init_global_lock_handler(void);
acpi_status acpi_ev_acquire_global_lock(u16 timeout);
acpi_status acpi_ev_release_global_lock(void);
acpi_status acpi_ev_remove_global_lock_handler(void);
/*
* evgpe - Low-level GPE support
*/

View file

@ -214,24 +214,23 @@ ACPI_EXTERN struct acpi_mutex_info acpi_gbl_mutex_info[ACPI_NUM_MUTEX];
/*
* Global lock mutex is an actual AML mutex object
* Global lock semaphore works in conjunction with the HW global lock
* Global lock semaphore works in conjunction with the actual global lock
* Global lock spinlock is used for "pending" handshake
*/
ACPI_EXTERN union acpi_operand_object *acpi_gbl_global_lock_mutex;
ACPI_EXTERN acpi_semaphore acpi_gbl_global_lock_semaphore;
ACPI_EXTERN acpi_spinlock acpi_gbl_global_lock_pending_lock;
ACPI_EXTERN u16 acpi_gbl_global_lock_handle;
ACPI_EXTERN u8 acpi_gbl_global_lock_acquired;
ACPI_EXTERN u8 acpi_gbl_global_lock_present;
ACPI_EXTERN u8 acpi_gbl_global_lock_pending;
/*
* Spinlocks are used for interfaces that can be possibly called at
* interrupt level
*/
ACPI_EXTERN spinlock_t _acpi_gbl_gpe_lock; /* For GPE data structs and registers */
ACPI_EXTERN spinlock_t _acpi_gbl_hardware_lock; /* For ACPI H/W except GPE registers */
ACPI_EXTERN spinlock_t _acpi_ev_global_lock_pending_lock; /* For global lock */
#define acpi_gbl_gpe_lock &_acpi_gbl_gpe_lock
#define acpi_gbl_hardware_lock &_acpi_gbl_hardware_lock
#define acpi_ev_global_lock_pending_lock &_acpi_ev_global_lock_pending_lock
ACPI_EXTERN acpi_spinlock acpi_gbl_gpe_lock; /* For GPE data structs and registers */
ACPI_EXTERN acpi_spinlock acpi_gbl_hardware_lock; /* For ACPI H/W except GPE registers */
/*****************************************************************************
*

View file

@ -394,21 +394,6 @@
#define AML_CLASS_METHOD_CALL 0x09
#define AML_CLASS_UNKNOWN 0x0A
/* Predefined Operation Region space_iDs */
typedef enum {
REGION_MEMORY = 0,
REGION_IO,
REGION_PCI_CONFIG,
REGION_EC,
REGION_SMBUS,
REGION_CMOS,
REGION_PCI_BAR,
REGION_IPMI,
REGION_DATA_TABLE, /* Internal use only */
REGION_FIXED_HW = 0x7F
} AML_REGION_TYPES;
/* Comparison operation codes for match_op operator */
typedef enum {

View file

@ -450,7 +450,7 @@ acpi_status acpi_ds_load1_end_op(struct acpi_walk_state *walk_state)
status =
acpi_ex_create_region(op->named.data,
op->named.length,
REGION_DATA_TABLE,
ACPI_ADR_SPACE_DATA_TABLE,
walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);

View file

@ -562,7 +562,7 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
((op->common.value.arg)->common.value.
integer);
} else {
region_space = REGION_DATA_TABLE;
region_space = ACPI_ADR_SPACE_DATA_TABLE;
}
/*

View file

@ -0,0 +1,335 @@
/******************************************************************************
*
* Module Name: evglock - Global Lock support
*
*****************************************************************************/
/*
* Copyright (C) 2000 - 2011, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acevents.h"
#include "acinterp.h"
#define _COMPONENT ACPI_EVENTS
ACPI_MODULE_NAME("evglock")
/* Local prototypes */
static u32 acpi_ev_global_lock_handler(void *context);
/*******************************************************************************
*
* FUNCTION: acpi_ev_init_global_lock_handler
*
* PARAMETERS: None
*
* RETURN: Status
*
* DESCRIPTION: Install a handler for the global lock release event
*
******************************************************************************/
acpi_status acpi_ev_init_global_lock_handler(void)
{
acpi_status status;
ACPI_FUNCTION_TRACE(ev_init_global_lock_handler);
/* Attempt installation of the global lock handler */
status = acpi_install_fixed_event_handler(ACPI_EVENT_GLOBAL,
acpi_ev_global_lock_handler,
NULL);
/*
* If the global lock does not exist on this platform, the attempt to
* enable GBL_STATUS will fail (the GBL_ENABLE bit will not stick).
* Map to AE_OK, but mark global lock as not present. Any attempt to
* actually use the global lock will be flagged with an error.
*/
acpi_gbl_global_lock_present = FALSE;
if (status == AE_NO_HARDWARE_RESPONSE) {
ACPI_ERROR((AE_INFO,
"No response from Global Lock hardware, disabling lock"));
return_ACPI_STATUS(AE_OK);
}
status = acpi_os_create_lock(&acpi_gbl_global_lock_pending_lock);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
acpi_gbl_global_lock_pending = FALSE;
acpi_gbl_global_lock_present = TRUE;
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_remove_global_lock_handler
*
* PARAMETERS: None
*
* RETURN: Status
*
* DESCRIPTION: Remove the handler for the Global Lock
*
******************************************************************************/
acpi_status acpi_ev_remove_global_lock_handler(void)
{
acpi_status status;
ACPI_FUNCTION_TRACE(ev_remove_global_lock_handler);
acpi_gbl_global_lock_present = FALSE;
status = acpi_remove_fixed_event_handler(ACPI_EVENT_GLOBAL,
acpi_ev_global_lock_handler);
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_global_lock_handler
*
* PARAMETERS: Context - From thread interface, not used
*
* RETURN: ACPI_INTERRUPT_HANDLED
*
* DESCRIPTION: Invoked directly from the SCI handler when a global lock
* release interrupt occurs. If there is actually a pending
* request for the lock, signal the waiting thread.
*
******************************************************************************/
static u32 acpi_ev_global_lock_handler(void *context)
{
acpi_status status;
acpi_cpu_flags flags;
flags = acpi_os_acquire_lock(acpi_gbl_global_lock_pending_lock);
/*
* If a request for the global lock is not actually pending,
* we are done. This handles "spurious" global lock interrupts
* which are possible (and have been seen) with bad BIOSs.
*/
if (!acpi_gbl_global_lock_pending) {
goto cleanup_and_exit;
}
/*
* Send a unit to the global lock semaphore. The actual acquisition
* of the global lock will be performed by the waiting thread.
*/
status = acpi_os_signal_semaphore(acpi_gbl_global_lock_semaphore, 1);
if (ACPI_FAILURE(status)) {
ACPI_ERROR((AE_INFO, "Could not signal Global Lock semaphore"));
}
acpi_gbl_global_lock_pending = FALSE;
cleanup_and_exit:
acpi_os_release_lock(acpi_gbl_global_lock_pending_lock, flags);
return (ACPI_INTERRUPT_HANDLED);
}
/******************************************************************************
*
* FUNCTION: acpi_ev_acquire_global_lock
*
* PARAMETERS: Timeout - Max time to wait for the lock, in millisec.
*
* RETURN: Status
*
* DESCRIPTION: Attempt to gain ownership of the Global Lock.
*
* MUTEX: Interpreter must be locked
*
* Note: The original implementation allowed multiple threads to "acquire" the
* Global Lock, and the OS would hold the lock until the last thread had
* released it. However, this could potentially starve the BIOS out of the
* lock, especially in the case where there is a tight handshake between the
* Embedded Controller driver and the BIOS. Therefore, this implementation
* allows only one thread to acquire the HW Global Lock at a time, and makes
* the global lock appear as a standard mutex on the OS side.
*
*****************************************************************************/
acpi_status acpi_ev_acquire_global_lock(u16 timeout)
{
acpi_cpu_flags flags;
acpi_status status;
u8 acquired = FALSE;
ACPI_FUNCTION_TRACE(ev_acquire_global_lock);
/*
* Only one thread can acquire the GL at a time, the global_lock_mutex
* enforces this. This interface releases the interpreter if we must wait.
*/
status =
acpi_ex_system_wait_mutex(acpi_gbl_global_lock_mutex->mutex.
os_mutex, timeout);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/*
* Update the global lock handle and check for wraparound. The handle is
* only used for the external global lock interfaces, but it is updated
* here to properly handle the case where a single thread may acquire the
* lock via both the AML and the acpi_acquire_global_lock interfaces. The
* handle is therefore updated on the first acquire from a given thread
* regardless of where the acquisition request originated.
*/
acpi_gbl_global_lock_handle++;
if (acpi_gbl_global_lock_handle == 0) {
acpi_gbl_global_lock_handle = 1;
}
/*
* Make sure that a global lock actually exists. If not, just
* treat the lock as a standard mutex.
*/
if (!acpi_gbl_global_lock_present) {
acpi_gbl_global_lock_acquired = TRUE;
return_ACPI_STATUS(AE_OK);
}
flags = acpi_os_acquire_lock(acpi_gbl_global_lock_pending_lock);
do {
/* Attempt to acquire the actual hardware lock */
ACPI_ACQUIRE_GLOBAL_LOCK(acpi_gbl_FACS, acquired);
if (acquired) {
acpi_gbl_global_lock_acquired = TRUE;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Acquired hardware Global Lock\n"));
break;
}
/*
* Did not get the lock. The pending bit was set above, and
* we must now wait until we receive the global lock
* released interrupt.
*/
acpi_gbl_global_lock_pending = TRUE;
acpi_os_release_lock(acpi_gbl_global_lock_pending_lock, flags);
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Waiting for hardware Global Lock\n"));
/*
* Wait for handshake with the global lock interrupt handler.
* This interface releases the interpreter if we must wait.
*/
status =
acpi_ex_system_wait_semaphore
(acpi_gbl_global_lock_semaphore, ACPI_WAIT_FOREVER);
flags = acpi_os_acquire_lock(acpi_gbl_global_lock_pending_lock);
} while (ACPI_SUCCESS(status));
acpi_gbl_global_lock_pending = FALSE;
acpi_os_release_lock(acpi_gbl_global_lock_pending_lock, flags);
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_release_global_lock
*
* PARAMETERS: None
*
* RETURN: Status
*
* DESCRIPTION: Releases ownership of the Global Lock.
*
******************************************************************************/
acpi_status acpi_ev_release_global_lock(void)
{
u8 pending = FALSE;
acpi_status status = AE_OK;
ACPI_FUNCTION_TRACE(ev_release_global_lock);
/* Lock must be already acquired */
if (!acpi_gbl_global_lock_acquired) {
ACPI_WARNING((AE_INFO,
"Cannot release the ACPI Global Lock, it has not been acquired"));
return_ACPI_STATUS(AE_NOT_ACQUIRED);
}
if (acpi_gbl_global_lock_present) {
/* Allow any thread to release the lock */
ACPI_RELEASE_GLOBAL_LOCK(acpi_gbl_FACS, pending);
/*
* If the pending bit was set, we must write GBL_RLS to the control
* register
*/
if (pending) {
status =
acpi_write_bit_register
(ACPI_BITREG_GLOBAL_LOCK_RELEASE,
ACPI_ENABLE_EVENT);
}
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Released hardware Global Lock\n"));
}
acpi_gbl_global_lock_acquired = FALSE;
/* Release the local GL mutex */
acpi_os_release_mutex(acpi_gbl_global_lock_mutex->mutex.os_mutex);
return_ACPI_STATUS(status);
}
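For reference, the intended pairing of the two interfaces implemented above,
sketched as an ACPICA-internal caller would use them (with the interpreter
already locked, per the MUTEX note above). This is an illustrative fragment
rather than a compilable unit: do_protected_work() is a placeholder invented
here.

static acpi_status access_shared_hardware(void)
{
        acpi_status status;

        /* Blocks (up to the timeout) until this thread owns the lock */
        status = acpi_ev_acquire_global_lock(ACPI_WAIT_FOREVER);
        if (ACPI_FAILURE(status)) {
                return (status);
        }

        do_protected_work();    /* e.g. an EC handshake with the BIOS */

        /* Writes GBL_RLS if a contender set the pending bit meanwhile */
        return (acpi_ev_release_global_lock());
}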

View file

@ -45,7 +45,6 @@
#include "accommon.h"
#include "acevents.h"
#include "acnamesp.h"
#include "acinterp.h"
#define _COMPONENT ACPI_EVENTS
ACPI_MODULE_NAME("evmisc")
@ -53,10 +52,6 @@ ACPI_MODULE_NAME("evmisc")
/* Local prototypes */
static void ACPI_SYSTEM_XFACE acpi_ev_notify_dispatch(void *context);
static u32 acpi_ev_global_lock_handler(void *context);
static acpi_status acpi_ev_remove_global_lock_handler(void);
/*******************************************************************************
*
* FUNCTION: acpi_ev_is_notify_object
@ -275,304 +270,6 @@ static void ACPI_SYSTEM_XFACE acpi_ev_notify_dispatch(void *context)
acpi_ut_delete_generic_state(notify_info);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_global_lock_handler
*
* PARAMETERS: Context - From thread interface, not used
*
* RETURN: ACPI_INTERRUPT_HANDLED
*
* DESCRIPTION: Invoked directly from the SCI handler when a global lock
* release interrupt occurs. If there's a thread waiting for
* the global lock, signal it.
*
* NOTE: Assumes that the semaphore can be signaled from interrupt level. If
* this is not possible for some reason, a separate thread will have to be
* scheduled to do this.
*
******************************************************************************/
static u8 acpi_ev_global_lock_pending;
static u32 acpi_ev_global_lock_handler(void *context)
{
acpi_status status;
acpi_cpu_flags flags;
flags = acpi_os_acquire_lock(acpi_ev_global_lock_pending_lock);
if (!acpi_ev_global_lock_pending) {
goto out;
}
/* Send a unit to the semaphore */
status = acpi_os_signal_semaphore(acpi_gbl_global_lock_semaphore, 1);
if (ACPI_FAILURE(status)) {
ACPI_ERROR((AE_INFO, "Could not signal Global Lock semaphore"));
}
acpi_ev_global_lock_pending = FALSE;
out:
acpi_os_release_lock(acpi_ev_global_lock_pending_lock, flags);
return (ACPI_INTERRUPT_HANDLED);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_init_global_lock_handler
*
* PARAMETERS: None
*
* RETURN: Status
*
* DESCRIPTION: Install a handler for the global lock release event
*
******************************************************************************/
acpi_status acpi_ev_init_global_lock_handler(void)
{
acpi_status status;
ACPI_FUNCTION_TRACE(ev_init_global_lock_handler);
/* Attempt installation of the global lock handler */
status = acpi_install_fixed_event_handler(ACPI_EVENT_GLOBAL,
acpi_ev_global_lock_handler,
NULL);
/*
* If the global lock does not exist on this platform, the attempt to
* enable GBL_STATUS will fail (the GBL_ENABLE bit will not stick).
* Map to AE_OK, but mark global lock as not present. Any attempt to
* actually use the global lock will be flagged with an error.
*/
if (status == AE_NO_HARDWARE_RESPONSE) {
ACPI_ERROR((AE_INFO,
"No response from Global Lock hardware, disabling lock"));
acpi_gbl_global_lock_present = FALSE;
return_ACPI_STATUS(AE_OK);
}
acpi_gbl_global_lock_present = TRUE;
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_remove_global_lock_handler
*
* PARAMETERS: None
*
* RETURN: Status
*
* DESCRIPTION: Remove the handler for the Global Lock
*
******************************************************************************/
static acpi_status acpi_ev_remove_global_lock_handler(void)
{
acpi_status status;
ACPI_FUNCTION_TRACE(ev_remove_global_lock_handler);
acpi_gbl_global_lock_present = FALSE;
status = acpi_remove_fixed_event_handler(ACPI_EVENT_GLOBAL,
acpi_ev_global_lock_handler);
return_ACPI_STATUS(status);
}
/******************************************************************************
*
* FUNCTION: acpi_ev_acquire_global_lock
*
* PARAMETERS: Timeout - Max time to wait for the lock, in millisec.
*
* RETURN: Status
*
* DESCRIPTION: Attempt to gain ownership of the Global Lock.
*
* MUTEX: Interpreter must be locked
*
* Note: The original implementation allowed multiple threads to "acquire" the
* Global Lock, and the OS would hold the lock until the last thread had
* released it. However, this could potentially starve the BIOS out of the
* lock, especially in the case where there is a tight handshake between the
* Embedded Controller driver and the BIOS. Therefore, this implementation
* allows only one thread to acquire the HW Global Lock at a time, and makes
* the global lock appear as a standard mutex on the OS side.
*
*****************************************************************************/
static acpi_thread_id acpi_ev_global_lock_thread_id;
static int acpi_ev_global_lock_acquired;
acpi_status acpi_ev_acquire_global_lock(u16 timeout)
{
acpi_cpu_flags flags;
acpi_status status = AE_OK;
u8 acquired = FALSE;
ACPI_FUNCTION_TRACE(ev_acquire_global_lock);
/*
* Only one thread can acquire the GL at a time, the global_lock_mutex
* enforces this. This interface releases the interpreter if we must wait.
*/
status = acpi_ex_system_wait_mutex(
acpi_gbl_global_lock_mutex->mutex.os_mutex, 0);
if (status == AE_TIME) {
if (acpi_ev_global_lock_thread_id == acpi_os_get_thread_id()) {
acpi_ev_global_lock_acquired++;
return AE_OK;
}
}
if (ACPI_FAILURE(status)) {
status = acpi_ex_system_wait_mutex(
acpi_gbl_global_lock_mutex->mutex.os_mutex,
timeout);
}
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
acpi_ev_global_lock_thread_id = acpi_os_get_thread_id();
acpi_ev_global_lock_acquired++;
/*
* Update the global lock handle and check for wraparound. The handle is
* only used for the external global lock interfaces, but it is updated
* here to properly handle the case where a single thread may acquire the
* lock via both the AML and the acpi_acquire_global_lock interfaces. The
* handle is therefore updated on the first acquire from a given thread
* regardless of where the acquisition request originated.
*/
acpi_gbl_global_lock_handle++;
if (acpi_gbl_global_lock_handle == 0) {
acpi_gbl_global_lock_handle = 1;
}
/*
* Make sure that a global lock actually exists. If not, just treat the
* lock as a standard mutex.
*/
if (!acpi_gbl_global_lock_present) {
acpi_gbl_global_lock_acquired = TRUE;
return_ACPI_STATUS(AE_OK);
}
flags = acpi_os_acquire_lock(acpi_ev_global_lock_pending_lock);
do {
/* Attempt to acquire the actual hardware lock */
ACPI_ACQUIRE_GLOBAL_LOCK(acpi_gbl_FACS, acquired);
if (acquired) {
acpi_gbl_global_lock_acquired = TRUE;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Acquired hardware Global Lock\n"));
break;
}
acpi_ev_global_lock_pending = TRUE;
acpi_os_release_lock(acpi_ev_global_lock_pending_lock, flags);
/*
* Did not get the lock. The pending bit was set above, and we
* must wait until we get the global lock released interrupt.
*/
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Waiting for hardware Global Lock\n"));
/*
* Wait for handshake with the global lock interrupt handler.
* This interface releases the interpreter if we must wait.
*/
status = acpi_ex_system_wait_semaphore(
acpi_gbl_global_lock_semaphore,
ACPI_WAIT_FOREVER);
flags = acpi_os_acquire_lock(acpi_ev_global_lock_pending_lock);
} while (ACPI_SUCCESS(status));
acpi_ev_global_lock_pending = FALSE;
acpi_os_release_lock(acpi_ev_global_lock_pending_lock, flags);
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_release_global_lock
*
* PARAMETERS: None
*
* RETURN: Status
*
* DESCRIPTION: Releases ownership of the Global Lock.
*
******************************************************************************/
acpi_status acpi_ev_release_global_lock(void)
{
u8 pending = FALSE;
acpi_status status = AE_OK;
ACPI_FUNCTION_TRACE(ev_release_global_lock);
/* Lock must be already acquired */
if (!acpi_gbl_global_lock_acquired) {
ACPI_WARNING((AE_INFO,
"Cannot release the ACPI Global Lock, it has not been acquired"));
return_ACPI_STATUS(AE_NOT_ACQUIRED);
}
acpi_ev_global_lock_acquired--;
if (acpi_ev_global_lock_acquired > 0) {
return AE_OK;
}
if (acpi_gbl_global_lock_present) {
/* Allow any thread to release the lock */
ACPI_RELEASE_GLOBAL_LOCK(acpi_gbl_FACS, pending);
/*
* If the pending bit was set, we must write GBL_RLS to the control
* register
*/
if (pending) {
status =
acpi_write_bit_register
(ACPI_BITREG_GLOBAL_LOCK_RELEASE,
ACPI_ENABLE_EVENT);
}
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Released hardware Global Lock\n"));
}
acpi_gbl_global_lock_acquired = FALSE;
/* Release the local GL mutex */
acpi_ev_global_lock_thread_id = 0;
acpi_ev_global_lock_acquired = 0;
acpi_os_release_mutex(acpi_gbl_global_lock_mutex->mutex.os_mutex);
return_ACPI_STATUS(status);
}
/******************************************************************************
*
* FUNCTION: acpi_ev_terminate

Some files were not shown because too many files have changed in this diff