
Merge commit 'v2.6.39-rc4' into sched/core

Merge reason: Pick up upstream fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar 2011-04-21 11:39:21 +02:00
commit 42ac9e87fd
199 changed files with 2306 additions and 1161 deletions


@ -0,0 +1,262 @@
The input protocol uses a map of types and codes to express input device values
to userspace. This document describes the types and codes and how and when they
may be used.

A single hardware event generates multiple input events. Each input event
contains the new value of a single data item. A special event type, EV_SYN, is
used to separate input events into packets of input data changes occurring at
the same moment in time. In the following, the term "event" refers to a single
input event encompassing a type, code, and value.

The input protocol is a stateful protocol. Events are emitted only when values
of event codes have changed. However, the state is maintained within the Linux
input subsystem; drivers do not need to maintain the state and may attempt to
emit unchanged values without harm. Userspace may obtain the current state of
event code values using the EVIOCG* ioctls defined in linux/input.h. The event
reports supported by a device are also provided by sysfs in
class/input/event*/device/capabilities/, and the properties of a device are
provided in class/input/event*/device/properties.
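
As an illustration, the sketch below shows a minimal userspace reader of this
protocol. It is only a sketch: the device path is an example, and error
handling is trimmed.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <linux/input.h>

    int main(void)
    {
            /* Example path; pick the event node of the device to watch. */
            int fd = open("/dev/input/event0", O_RDONLY);
            struct input_event ev;

            if (fd < 0)
                    return 1;

            /* Each read yields one event: a (type, code, value) triple. */
            while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
                    if (ev.type == EV_SYN && ev.code == SYN_REPORT)
                            printf("--- end of packet ---\n");
                    else
                            printf("type %u code %u value %d\n",
                                   ev.type, ev.code, ev.value);
            }
            return 0;
    }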

Types:
==========
Types are groupings of codes under a logical input construct. Each type has a
set of applicable codes to be used in generating events. See the Codes section
for details on valid codes for each type.
* EV_SYN:
- Used as markers to separate events. Events may be separated in time or in
space, such as with the multitouch protocol.
* EV_KEY:
- Used to describe state changes of keyboards, buttons, or other key-like
devices.
* EV_REL:
- Used to describe relative axis value changes, e.g. moving the mouse 5 units
to the left.
* EV_ABS:
- Used to describe absolute axis value changes, e.g. describing the
coordinates of a touch on a touchscreen.
* EV_MSC:
- Used to describe miscellaneous input data that do not fit into other types.
* EV_SW:
- Used to describe binary state input switches.
* EV_LED:
- Used to turn LEDs on devices on and off.
* EV_SND:
- Used to output sound to devices.
* EV_REP:
- Used for autorepeating devices.
* EV_FF:
- Used to send force feedback commands to an input device.
* EV_PWR:
- A special type for power button and switch input.
* EV_FF_STATUS:
- Used to receive force feedback device status.

Codes:
==========
Codes define the precise type of event.

EV_SYN:
----------
EV_SYN event values are undefined. Their usage is defined only by when they are
sent in the evdev event stream.
* SYN_REPORT:
- Used to synchronize and separate events into packets of input data changes
occurring at the same moment in time. For example, motion of a mouse may set
the REL_X and REL_Y values for one motion, then emit a SYN_REPORT. The next
motion will emit more REL_X and REL_Y values and send another SYN_REPORT.
* SYN_CONFIG:
- TBD
* SYN_MT_REPORT:
- Used to synchronize and separate touch events. See the
multi-touch-protocol.txt document for more information.
* SYN_DROPPED:
- Used to indicate buffer overrun in the evdev client's event queue.
Clients should ignore all events up to and including the next SYN_REPORT
event and query the device (using EVIOCG* ioctls) to obtain its
current state.
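
A minimal sketch of that recovery path, for a keyboard-like device, might
look like this:

    #include <linux/input.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    static unsigned long keys[KEY_MAX / (8 * sizeof(long)) + 1];

    /* After SYN_DROPPED: skip the rest of the broken packet, then ask
     * the device for its current state rather than trusting the queue. */
    static void resync_after_drop(int fd)
    {
            struct input_event ev;

            while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
                    if (ev.type == EV_SYN && ev.code == SYN_REPORT)
                            break;

            /* EVIOCGKEY fills a bitmap of the currently held-down keys. */
            ioctl(fd, EVIOCGKEY(sizeof(keys)), keys);
    }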

EV_KEY:
----------
EV_KEY events take the form KEY_<name> or BTN_<name>. For example, KEY_A is used
to represent the 'A' key on a keyboard. When a key is depressed, an event with
the key's code is emitted with value 1. When the key is released, an event is
emitted with value 0. Some hardware sends events when a key is repeated. These
events have a value of 2. In general, KEY_<name> is used for keyboard keys, and
BTN_<name> is used for other types of momentary switch events.
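
On the driver side, a press/release pair is reported roughly as in the sketch
below, where dev is assumed to be a registered struct input_dev:

    /* Press: value 1, then close the packet. */
    input_report_key(dev, KEY_A, 1);
    input_sync(dev);

    /* Release: value 0. Repeat events (value 2) are normally generated
     * by the input core when EV_REP is enabled, not by the driver. */
    input_report_key(dev, KEY_A, 0);
    input_sync(dev);
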
A few EV_KEY codes have special meanings:
* BTN_TOOL_<name>:
- These codes are used in conjunction with input trackpads, tablets, and
touchscreens. These devices may be used with fingers, pens, or other tools.
When an event occurs and a tool is used, the corresponding BTN_TOOL_<name>
code should be set to a value of 1. When the tool is no longer interacting
with the input device, the BTN_TOOL_<name> code should be reset to 0. All
trackpads, tablets, and touchscreens should use at least one BTN_TOOL_<name>
code when events are generated.
* BTN_TOUCH:
BTN_TOUCH is used for touch contact. While an input tool is determined to be
within meaningful physical contact, the value of this property must be set
to 1. Meaningful physical contact may mean any contact, or it may mean
contact conditioned by an implementation defined property. For example, a
touchpad may set the value to 1 only when the touch pressure rises above a
certain value. BTN_TOUCH may be combined with BTN_TOOL_<name> codes. For
example, a pen tablet may set BTN_TOOL_PEN to 1 and BTN_TOUCH to 0 while the
pen is hovering over but not touching the tablet surface.
Note: For appropriate function of the legacy mousedev emulation driver,
BTN_TOUCH must be the first evdev code emitted in a synchronization frame.
Note: Historically a touch device with BTN_TOOL_FINGER and BTN_TOUCH was
interpreted as a touchpad by userspace, while a similar device without
BTN_TOOL_FINGER was interpreted as a touchscreen. For backwards compatibility
with current userspace it is recommended to follow this distinction. In the
future, this distinction will be deprecated and the device properties ioctl
EVIOCGPROP, defined in linux/input.h, will be used to convey the device type.
* BTN_TOOL_FINGER, BTN_TOOL_DOUBLETAP, BTN_TOOL_TRIPLETAP, BTN_TOOL_QUADTAP:
- These codes denote one, two, three, and four finger interaction on a
trackpad or touchscreen. For example, if the user uses two fingers and moves
them on the touchpad in an effort to scroll content on screen,
BTN_TOOL_DOUBLETAP should be set to value 1 for the duration of the motion.
Note that all BTN_TOOL_<name> codes and the BTN_TOUCH code are orthogonal in
purpose. A trackpad event generated by finger touches should generate events
for one code from each group. At most one of these BTN_TOOL_<name>
codes should have a value of 1 during any synchronization frame.
Note: Historically some drivers emitted multiple of the finger count codes with
a value of 1 in the same synchronization frame. This usage is deprecated.
Note: In multitouch drivers, the input_mt_report_finger_count() function should
be used to emit these codes. Please see multi-touch-protocol.txt for details.
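
For a trackpad driver that does not use the multitouch helpers, a two-finger
touch might be reported as in this sketch (dev, x and y are placeholders):

    /* Two fingers down: BTN_TOUCH plus exactly one tool code. */
    input_report_key(dev, BTN_TOUCH, 1);
    input_report_key(dev, BTN_TOOL_DOUBLETAP, 1);
    input_report_abs(dev, ABS_X, x);
    input_report_abs(dev, ABS_Y, y);
    input_sync(dev);

    /* Fingers lifted: clear both groups in the same frame. */
    input_report_key(dev, BTN_TOUCH, 0);
    input_report_key(dev, BTN_TOOL_DOUBLETAP, 0);
    input_sync(dev);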

EV_REL:
----------
EV_REL events describe relative changes in a property. For example, a mouse may
move to the left by a certain number of units, but its absolute position in
space is unknown. If the absolute position is known, EV_ABS codes should be used
instead of EV_REL codes.
A few EV_REL codes have special meanings:
* REL_WHEEL, REL_HWHEEL:
- These codes are used for vertical and horizontal scroll wheels,
respectively.

EV_ABS:
----------
EV_ABS events describe absolute changes in a property. For example, a touchpad
may emit coordinates for a touch location.
A few EV_ABS codes have special meanings:
* ABS_DISTANCE:
- Used to describe the distance of a tool from an interaction surface. This
event should only be emitted while the tool is hovering, meaning in close
proximity of the device and while the value of the BTN_TOUCH code is 0. If
the input device may be used freely in three dimensions, consider ABS_Z
instead.
* ABS_MT_<name>:
- Used to describe multitouch input events. Please see
multi-touch-protocol.txt for details.

EV_SW:
----------
EV_SW events describe stateful binary switches. For example, the SW_LID code is
used to denote when a laptop lid is closed.
Upon binding to a device or resuming from suspend, a driver must report
the current switch state. This ensures that the device, kernel, and userspace
state is in sync.
Upon resume, if the switch state is the same as before suspend, then the input
subsystem will filter out the duplicate switch state reports. The driver does
not need to keep the state of the switch at any time.
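
For example, a lid switch driver could simply re-emit the state on probe and
resume, as in this sketch (lid_is_closed() is a hypothetical helper):

    /* Safe to emit unconditionally; the core filters duplicates. */
    input_report_switch(dev, SW_LID, lid_is_closed());
    input_sync(dev);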

EV_MSC:
----------
EV_MSC events are used for input and output events that do not fall under other
categories.

EV_LED:
----------
EV_LED events are used for input and output to set and query the state of
various LEDs on devices.

EV_REP:
----------
EV_REP events are used for specifying autorepeating events.

EV_SND:
----------
EV_SND events are used for sending sound commands to simple sound output
devices.

EV_FF:
----------
EV_FF events are used to initialize a force feedback capable device and to
cause such a device to play feedback effects.

EV_PWR:
----------
EV_PWR events are a special type of event used specifically for power
management. Their usage is not well defined; this will be addressed later.

Guidelines:
==========
The guidelines below ensure proper single-touch and multi-finger functionality.
For multi-touch functionality, see the multi-touch-protocol.txt document for
more information.

Mice:
----------
REL_{X,Y} must be reported when the mouse moves. BTN_LEFT must be used to report
the primary button press. BTN_{MIDDLE,RIGHT,4,5,etc.} should be used to report
further buttons of the device. REL_WHEEL and REL_HWHEEL should be used to report
scroll wheel events where available.
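
A rough sketch of the corresponding capability setup and motion report for a
mouse driver (dev, dx and dy are placeholders):

    /* Capability setup at probe time. */
    __set_bit(EV_KEY, dev->evbit);
    __set_bit(EV_REL, dev->evbit);
    __set_bit(BTN_LEFT, dev->keybit);
    __set_bit(REL_X, dev->relbit);
    __set_bit(REL_Y, dev->relbit);
    __set_bit(REL_WHEEL, dev->relbit);

    /* Per-motion report. */
    input_report_rel(dev, REL_X, dx);
    input_report_rel(dev, REL_Y, dy);
    input_sync(dev);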

Touchscreens:
----------
ABS_{X,Y} must be reported with the location of the touch. BTN_TOUCH must be
used to report when a touch is active on the screen.
BTN_{MOUSE,LEFT,MIDDLE,RIGHT} must not be reported as the result of touch
contact. BTN_TOOL_<name> events should be reported where possible.

Trackpads:
----------
Legacy trackpads that only provide relative position information must report
events like mice described above.
Trackpads that provide absolute touch position must report ABS_{X,Y} for the
location of the touch. BTN_TOUCH should be used to report when a touch is active
on the trackpad. Where multi-finger support is available, BTN_TOOL_<name> should
be used to report the number of touches active on the trackpad.

Tablets:
----------
BTN_TOOL_<name> events must be reported when a stylus or other tool is active on
the tablet. ABS_{X,Y} must be reported with the location of the tool. BTN_TOUCH
should be used to report when the tool is in contact with the tablet.
BTN_{STYLUS,STYLUS2} should be used to report buttons on the tool itself. Any
button may be used for buttons on the tablet except BTN_{MOUSE,LEFT}.
BTN_{0,1,2,etc} are good generic codes for unlabeled buttons. Do not use
meaningful buttons, like BTN_FORWARD, unless the button is labeled for that
purpose on the device.
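
A hover-then-contact sequence for a pen tablet might look like this sketch
(dev, x and y are placeholders):

    /* Pen in proximity, hovering: tool active, no touch. */
    input_report_key(dev, BTN_TOOL_PEN, 1);
    input_report_key(dev, BTN_TOUCH, 0);
    input_report_abs(dev, ABS_X, x);
    input_report_abs(dev, ABS_Y, y);
    input_sync(dev);

    /* Pen tip makes contact. */
    input_report_key(dev, BTN_TOUCH, 1);
    input_sync(dev);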


@ -184,10 +184,9 @@ F: Documentation/filesystems/9p.txt
F: fs/9p/
A2232 SERIAL BOARD DRIVER
M: Enver Haase <A2232@gmx.net>
L: linux-m68k@lists.linux-m68k.org
S: Maintained
F: drivers/char/ser_a2232*
S: Orphan
F: drivers/staging/generic_serial/ser_a2232*
AACRAID SCSI RAID DRIVER
M: Adaptec OEM Raid Solutions <aacraid@adaptec.com>
@ -877,6 +876,13 @@ F: arch/arm/mach-mv78xx0/
F: arch/arm/mach-orion5x/
F: arch/arm/plat-orion/
ARM/Orion SoC/Technologic Systems TS-78xx platform support
M: Alexander Clouter <alex@digriz.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.digriz.org.uk/ts78xx/kernel
S: Maintained
F: arch/arm/mach-orion5x/ts78xx-*
ARM/MIOA701 MACHINE SUPPORT
M: Robert Jarzmik <robert.jarzmik@free.fr>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -1063,7 +1069,7 @@ F: arch/arm/mach-shmobile/
F: drivers/sh/
ARM/TELECHIPS ARM ARCHITECTURE
M: "Hans J. Koch" <hjk@linutronix.de>
M: "Hans J. Koch" <hjk@hansjkoch.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/plat-tcc/
@ -1823,11 +1829,10 @@ S: Maintained
F: drivers/platform/x86/compal-laptop.c
COMPUTONE INTELLIPORT MULTIPORT CARD
M: "Michael H. Warfield" <mhw@wittsend.com>
W: http://www.wittsend.com/computone.html
S: Maintained
S: Orphan
F: Documentation/serial/computone.txt
F: drivers/char/ip2/
F: drivers/staging/tty/ip2/
CONEXANT ACCESSRUNNER USB DRIVER
M: Simon Arlott <cxacru@fire.lp0.eu>
@ -2010,7 +2015,7 @@ F: drivers/net/wan/cycx*
CYCLADES ASYNC MUX DRIVER
W: http://www.cyclades.com/
S: Orphan
F: drivers/char/cyclades.c
F: drivers/tty/cyclades.c
F: include/linux/cyclades.h
CYCLADES PC300 DRIVER
@ -2124,8 +2129,8 @@ L: Eng.Linux@digi.com
W: http://www.digi.com
S: Orphan
F: Documentation/serial/digiepca.txt
F: drivers/char/epca*
F: drivers/char/digi*
F: drivers/staging/tty/epca*
F: drivers/staging/tty/digi*
DIOLAN U2C-12 I2C DRIVER
M: Guenter Roeck <guenter.roeck@ericsson.com>
@ -4077,7 +4082,7 @@ F: drivers/video/matrox/matroxfb_*
F: include/linux/matroxfb.h
MAX6650 HARDWARE MONITOR AND FAN CONTROLLER DRIVER
M: "Hans J. Koch" <hjk@linutronix.de>
M: "Hans J. Koch" <hjk@hansjkoch.de>
L: lm-sensors@lm-sensors.org
S: Maintained
F: Documentation/hwmon/max6650
@ -4192,7 +4197,7 @@ MOXA SMARTIO/INDUSTIO/INTELLIO SERIAL CARD
M: Jiri Slaby <jirislaby@gmail.com>
S: Maintained
F: Documentation/serial/moxa-smartio
F: drivers/char/mxser.*
F: drivers/tty/mxser.*
MSI LAPTOP SUPPORT
M: "Lee, Chun-Yi" <jlee@novell.com>
@ -4234,7 +4239,7 @@ F: sound/oss/msnd*
MULTITECH MULTIPORT CARD (ISICOM)
S: Orphan
F: drivers/char/isicom.c
F: drivers/tty/isicom.c
F: include/linux/isicom.h
MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER
@ -5273,14 +5278,14 @@ F: drivers/memstick/host/r592.*
RISCOM8 DRIVER
S: Orphan
F: Documentation/serial/riscom8.txt
F: drivers/char/riscom8*
F: drivers/staging/tty/riscom8*
ROCKETPORT DRIVER
P: Comtrol Corp.
W: http://www.comtrol.com
S: Maintained
F: Documentation/serial/rocket.txt
F: drivers/char/rocket*
F: drivers/tty/rocket*
ROSE NETWORK LAYER
M: Ralf Baechle <ralf@linux-mips.org>
@ -5916,10 +5921,9 @@ F: arch/arm/mach-spear6xx/spear600.c
F: arch/arm/mach-spear6xx/spear600_evb.c
SPECIALIX IO8+ MULTIPORT SERIAL CARD DRIVER
M: Roger Wolff <R.E.Wolff@BitWizard.nl>
S: Supported
S: Orphan
F: Documentation/serial/specialix.txt
F: drivers/char/specialix*
F: drivers/staging/tty/specialix*
SPI SUBSYSTEM
M: David Brownell <dbrownell@users.sourceforge.net>
@ -5964,7 +5968,6 @@ F: arch/alpha/kernel/srm_env.c
STABLE BRANCH
M: Greg Kroah-Hartman <greg@kroah.com>
M: Chris Wright <chrisw@sous-sol.org>
L: stable@kernel.org
S: Maintained
@ -6248,7 +6251,8 @@ M: Greg Ungerer <gerg@uclinux.org>
W: http://www.uclinux.org/
L: uclinux-dev@uclinux.org (subscribers-only)
S: Maintained
F: arch/m68knommu/
F: arch/m68k/*/*_no.*
F: arch/m68k/include/asm/*_no.*
UCLINUX FOR RENESAS H8/300 (H8300)
M: Yoshinori Sato <ysato@users.sourceforge.jp>
@ -6618,7 +6622,7 @@ F: fs/hostfs/
F: fs/hppfs/
USERSPACE I/O (UIO)
M: "Hans J. Koch" <hjk@linutronix.de>
M: "Hans J. Koch" <hjk@hansjkoch.de>
M: Greg Kroah-Hartman <gregkh@suse.de>
S: Maintained
F: Documentation/DocBook/uio-howto.tmpl


@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 39
EXTRAVERSION = -rc3
EXTRAVERSION = -rc4
NAME = Flesh-Eating Bats with Fangs
# *DOCUMENTATION*


@ -4,7 +4,7 @@
extra-y := head.o vmlinux.lds
asflags-y := $(KBUILD_CFLAGS)
ccflags-y := -Werror -Wno-sign-compare
ccflags-y := -Wno-sign-compare
obj-y := entry.o traps.o process.o init_task.o osf_sys.o irq.o \
irq_alpha.o signal.o setup.o ptrace.o time.o \


@ -88,7 +88,7 @@ conf_read(unsigned long addr, unsigned char type1,
{
unsigned long flags;
unsigned long mid = MCPCIA_HOSE2MID(hose->index);
unsigned int stat0, value, temp, cpu;
unsigned int stat0, value, cpu;
cpu = smp_processor_id();
@ -101,7 +101,7 @@ conf_read(unsigned long addr, unsigned char type1,
stat0 = *(vuip)MCPCIA_CAP_ERR(mid);
*(vuip)MCPCIA_CAP_ERR(mid) = stat0;
mb();
temp = *(vuip)MCPCIA_CAP_ERR(mid);
*(vuip)MCPCIA_CAP_ERR(mid);
DBG_CFG(("conf_read: MCPCIA_CAP_ERR(%d) was 0x%x\n", mid, stat0));
mb();
@ -136,7 +136,7 @@ conf_write(unsigned long addr, unsigned int value, unsigned char type1,
{
unsigned long flags;
unsigned long mid = MCPCIA_HOSE2MID(hose->index);
unsigned int stat0, temp, cpu;
unsigned int stat0, cpu;
cpu = smp_processor_id();
@ -145,7 +145,7 @@ conf_write(unsigned long addr, unsigned int value, unsigned char type1,
/* Reset status register to avoid losing errors. */
stat0 = *(vuip)MCPCIA_CAP_ERR(mid);
*(vuip)MCPCIA_CAP_ERR(mid) = stat0; mb();
temp = *(vuip)MCPCIA_CAP_ERR(mid);
*(vuip)MCPCIA_CAP_ERR(mid);
DBG_CFG(("conf_write: MCPCIA CAP_ERR(%d) was 0x%x\n", mid, stat0));
draina();
@ -157,7 +157,7 @@ conf_write(unsigned long addr, unsigned int value, unsigned char type1,
*((vuip)addr) = value;
mb();
mb(); /* magic */
temp = *(vuip)MCPCIA_CAP_ERR(mid); /* read to force the write */
*(vuip)MCPCIA_CAP_ERR(mid); /* read to force the write */
mcheck_expected(cpu) = 0;
mb();
@ -572,12 +572,10 @@ mcpcia_print_system_area(unsigned long la_ptr)
void
mcpcia_machine_check(unsigned long vector, unsigned long la_ptr)
{
struct el_common *mchk_header;
struct el_MCPCIA_uncorrected_frame_mcheck *mchk_logout;
unsigned int cpu = smp_processor_id();
int expected;
mchk_header = (struct el_common *)la_ptr;
mchk_logout = (struct el_MCPCIA_uncorrected_frame_mcheck *)la_ptr;
expected = mcheck_expected(cpu);


@ -533,8 +533,6 @@ static struct el_subpacket_annotation el_titan_annotations[] = {
static struct el_subpacket *
el_process_regatta_subpacket(struct el_subpacket *header)
{
int status;
if (header->class != EL_CLASS__REGATTA_FAMILY) {
printk("%s ** Unexpected header CLASS %d TYPE %d, aborting\n",
err_print_prefix,
@ -551,7 +549,7 @@ el_process_regatta_subpacket(struct el_subpacket *header)
printk("%s ** Occurred on CPU %d:\n",
err_print_prefix,
(int)header->by_type.regatta_frame.cpuid);
status = privateer_process_logout_frame((struct el_common *)
privateer_process_logout_frame((struct el_common *)
header->by_type.regatta_frame.data_start, 1);
break;
default:


@ -228,7 +228,7 @@ struct irqaction timer_irqaction = {
void __init
init_rtc_irq(void)
{
irq_set_chip_and_handler_name(RTC_IRQ, &no_irq_chip,
irq_set_chip_and_handler_name(RTC_IRQ, &dummy_irq_chip,
handle_simple_irq, "RTC");
setup_irq(RTC_IRQ, &timer_irqaction);
}


@ -1404,8 +1404,6 @@ determine_cpu_caches (unsigned int cpu_type)
case PCA56_CPU:
case PCA57_CPU:
{
unsigned long cbox_config, size;
if (cpu_type == PCA56_CPU) {
L1I = CSHAPE(16*1024, 6, 1);
L1D = CSHAPE(8*1024, 5, 1);
@ -1415,10 +1413,12 @@ determine_cpu_caches (unsigned int cpu_type)
}
L3 = -1;
#if 0
unsigned long cbox_config, size;
cbox_config = *(vulp) phys_to_virt (0xfffff00008UL);
size = 512*1024 * (1 << ((cbox_config >> 12) & 3));
#if 0
L2 = ((cbox_config >> 31) & 1 ? CSHAPE (size, 6, 1) : -1);
#else
L2 = external_cache_probe(512*1024, 6);


@ -79,7 +79,6 @@
static unsigned long __init SMCConfigState(unsigned long baseAddr)
{
unsigned char devId;
unsigned char devRev;
unsigned long configPort;
unsigned long indexPort;
@ -100,7 +99,7 @@ static unsigned long __init SMCConfigState(unsigned long baseAddr)
devId = inb(dataPort);
if (devId == VALID_DEVICE_ID) {
outb(DEVICE_REV, indexPort);
devRev = inb(dataPort);
/* unsigned char devRev = */ inb(dataPort);
break;
}
else


@ -156,7 +156,6 @@ static void __init
wildfire_init_irq_per_pca(int qbbno, int pcano)
{
int i, irq_bias;
unsigned long io_bias;
static struct irqaction isa_enable = {
.handler = no_action,
.name = "isa_enable",
@ -165,10 +164,12 @@ wildfire_init_irq_per_pca(int qbbno, int pcano)
irq_bias = qbbno * (WILDFIRE_PCA_PER_QBB * WILDFIRE_IRQ_PER_PCA)
+ pcano * WILDFIRE_IRQ_PER_PCA;
#if 0
unsigned long io_bias;
/* Only need the following for first PCI bus per PCA. */
io_bias = WILDFIRE_IO(qbbno, pcano<<1) - WILDFIRE_IO_BIAS;
#if 0
outb(0, DMA1_RESET_REG + io_bias);
outb(0, DMA2_RESET_REG + io_bias);
outb(DMA_MODE_CASCADE, DMA2_MODE_REG + io_bias);


@ -153,6 +153,7 @@ void read_persistent_clock(struct timespec *ts)
year += 100;
ts->tv_sec = mktime(year, mon, day, hour, min, sec);
ts->tv_nsec = 0;
}


@ -1540,7 +1540,6 @@ config HIGHMEM
config HIGHPTE
bool "Allocate 2nd-level pagetables from highmem"
depends on HIGHMEM
depends on !OUTER_CACHE
config HW_PERF_EVENTS
bool "Enable hardware performance counter support for perf events"
@ -2012,6 +2011,8 @@ source "kernel/power/Kconfig"
config ARCH_SUSPEND_POSSIBLE
depends on !ARCH_S5P64X0 && !ARCH_S5P6442
depends on CPU_ARM920T || CPU_ARM926T || CPU_SA1100 || \
CPU_V6 || CPU_V6K || CPU_V7 || CPU_XSC3 || CPU_XSCALE
def_bool y
endmenu


@ -63,17 +63,6 @@ config DEBUG_USER
8 - SIGSEGV faults
16 - SIGBUS faults
config DEBUG_ERRORS
bool "Verbose kernel error messages"
depends on DEBUG_KERNEL
help
This option controls verbose debugging information which can be
printed when the kernel detects an internal error. This debugging
information is useful to kernel hackers when tracking down problems,
but mostly meaningless to other people. It's safe to say Y unless
you are concerned with the code size or don't want to see these
messages.
config DEBUG_STACK_USAGE
bool "Enable stack utilization instrumentation"
depends on DEBUG_KERNEL


@ -16,5 +16,4 @@ obj-$(CONFIG_SHARP_SCOOP) += scoop.o
obj-$(CONFIG_ARCH_IXP2000) += uengine.o
obj-$(CONFIG_ARCH_IXP23XX) += uengine.o
obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o
obj-$(CONFIG_COMMON_CLKDEV) += clkdev.o
obj-$(CONFIG_ARM_TIMER_SP804) += timer-sp.o


@ -43,6 +43,7 @@ static inline void thread_notify(unsigned long rc, struct thread_info *thread)
#define THREAD_NOTIFY_FLUSH 0
#define THREAD_NOTIFY_EXIT 1
#define THREAD_NOTIFY_SWITCH 2
#define THREAD_NOTIFY_COPY 3
#endif
#endif


@ -29,7 +29,7 @@ obj-$(CONFIG_MODULES) += armksyms.o module.o
obj-$(CONFIG_ARTHUR) += arthur.o
obj-$(CONFIG_ISA_DMA) += dma-isa.o
obj-$(CONFIG_PCI) += bios32.o isa.o
obj-$(CONFIG_PM) += sleep.o
obj-$(CONFIG_PM_SLEEP) += sleep.o
obj-$(CONFIG_HAVE_SCHED_CLOCK) += sched_clock.o
obj-$(CONFIG_SMP) += smp.o smp_tlb.o
obj-$(CONFIG_HAVE_ARM_SCU) += smp_scu.o


@ -40,15 +40,22 @@ EXPORT_SYMBOL(elf_check_arch);
void elf_set_personality(const struct elf32_hdr *x)
{
unsigned int eflags = x->e_flags;
unsigned int personality = PER_LINUX_32BIT;
unsigned int personality = current->personality & ~PER_MASK;
/*
* We only support Linux ELF executables, so always set the
* personality to LINUX.
*/
personality |= PER_LINUX;
/*
* APCS-26 is only valid for OABI executables
*/
if ((eflags & EF_ARM_EABI_MASK) == EF_ARM_EABI_UNKNOWN) {
if (eflags & EF_ARM_APCS_26)
personality = PER_LINUX;
}
if ((eflags & EF_ARM_EABI_MASK) == EF_ARM_EABI_UNKNOWN &&
(eflags & EF_ARM_APCS_26))
personality &= ~ADDR_LIMIT_32BIT;
else
personality |= ADDR_LIMIT_32BIT;
set_personality(personality);


@ -868,6 +868,13 @@ static void reset_ctrl_regs(void *info)
*/
asm volatile("mcr p14, 0, %0, c1, c0, 4" : : "r" (0));
isb();
/*
* Clear any configured vector-catch events before
* enabling monitor mode.
*/
asm volatile("mcr p14, 0, %0, c0, c7, 0" : : "r" (0));
isb();
}
if (enable_monitor_mode())


@ -221,7 +221,7 @@ again:
prev_raw_count &= armpmu->max_period;
if (overflow)
delta = armpmu->max_period - prev_raw_count + new_raw_count;
delta = armpmu->max_period - prev_raw_count + new_raw_count + 1;
else
delta = new_raw_count - prev_raw_count;


@ -372,6 +372,8 @@ copy_thread(unsigned long clone_flags, unsigned long stack_start,
if (clone_flags & CLONE_SETTLS)
thread->tp_value = regs->ARM_r3;
thread_notify(THREAD_NOTIFY_COPY, thread);
return 0;
}


@ -410,8 +410,7 @@ static int bad_syscall(int n, struct pt_regs *regs)
struct thread_info *thread = current_thread_info();
siginfo_t info;
if (current->personality != PER_LINUX &&
current->personality != PER_LINUX_32BIT &&
if ((current->personality & PER_MASK) != PER_LINUX &&
thread->exec_domain->handler) {
thread->exec_domain->handler(n, regs);
return regs->ARM_r0;


@ -10,7 +10,7 @@
#define BANK_OFF(n) (((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2))
#define GPIO_REG(x) (*((volatile u32 *)(GPIO_REGS_VIRT + (x))))
#define NR_BUILTIN_GPIO (192)
#define NR_BUILTIN_GPIO IRQ_GPIO_NUM
#define gpio_to_bank(gpio) ((gpio) >> 5)
#define gpio_to_irq(gpio) (IRQ_GPIO_START + (gpio))


@ -8,6 +8,15 @@
#define MFP_DRIVE_MEDIUM (0x2 << 13)
#define MFP_DRIVE_FAST (0x3 << 13)
#undef MFP_CFG
#undef MFP_CFG_DRV
#define MFP_CFG(pin, af) \
(MFP_LPM_INPUT | MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_DRIVE_MEDIUM)
#define MFP_CFG_DRV(pin, af, drv) \
(MFP_LPM_INPUT | MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_DRIVE_##drv)
/* GPIO */
#define GPIO0_GPIO MFP_CFG(GPIO0, AF5)
#define GPIO1_GPIO MFP_CFG(GPIO1, AF5)


@ -160,10 +160,7 @@ static struct msm_mmc_platform_data qsd8x50_sdc1_data = {
static void __init qsd8x50_init_mmc(void)
{
if (machine_is_qsd8x50_ffa() || machine_is_qsd8x50a_ffa())
vreg_mmc = vreg_get(NULL, "gp6");
else
vreg_mmc = vreg_get(NULL, "gp5");
vreg_mmc = vreg_get(NULL, "gp5");
if (IS_ERR(vreg_mmc)) {
pr_err("vreg get for vreg_mmc failed (%ld)\n",


@ -269,7 +269,7 @@ int __cpuinit local_timer_setup(struct clock_event_device *evt)
/* Use existing clock_event for cpu 0 */
if (!smp_processor_id())
return;
return 0;
writel(DGT_CLK_CTL_DIV_4, MSM_TMR_BASE + DGT_CLK_CTL);


@ -99,11 +99,24 @@
#define GAFR(x) GPIO_REG(0x54 + (((x) & 0x70) >> 2))
#define NR_BUILTIN_GPIO 128
#define NR_BUILTIN_GPIO PXA_GPIO_IRQ_NUM
#define gpio_to_bank(gpio) ((gpio) >> 5)
#define gpio_to_irq(gpio) IRQ_GPIO(gpio)
#define irq_to_gpio(irq) IRQ_TO_GPIO(irq)
static inline int irq_to_gpio(unsigned int irq)
{
int gpio;
if (irq == IRQ_GPIO0 || irq == IRQ_GPIO1)
return irq - IRQ_GPIO0;
gpio = irq - PXA_GPIO_IRQ_BASE;
if (gpio >= 2 && gpio < NR_BUILTIN_GPIO)
return gpio;
return -1;
}
#ifdef CONFIG_CPU_PXA26x
/* GPIO86/87/88/89 on PXA26x have their direction bits in GPDR2 inverted,


@ -93,9 +93,6 @@
#define GPIO_2_x_TO_IRQ(x) (PXA_GPIO_IRQ_BASE + (x))
#define IRQ_GPIO(x) (((x) < 2) ? (IRQ_GPIO0 + (x)) : GPIO_2_x_TO_IRQ(x))
#define IRQ_TO_GPIO_2_x(i) ((i) - PXA_GPIO_IRQ_BASE)
#define IRQ_TO_GPIO(i) (((i) < IRQ_GPIO(2)) ? ((i) - IRQ_GPIO0) : IRQ_TO_GPIO_2_x(i))
/*
* The following interrupts are for board specific purposes. Since
* the kernel can only run on one machine at a time, we can re-use


@ -285,7 +285,7 @@ static inline void pxa25x_init_pm(void) {}
static int pxa25x_set_wake(struct irq_data *d, unsigned int on)
{
int gpio = IRQ_TO_GPIO(d->irq);
int gpio = irq_to_gpio(d->irq);
uint32_t mask = 0;
if (gpio >= 0 && gpio < 85)


@ -345,7 +345,7 @@ static inline void pxa27x_init_pm(void) {}
*/
static int pxa27x_set_wake(struct irq_data *d, unsigned int on)
{
int gpio = IRQ_TO_GPIO(d->irq);
int gpio = irq_to_gpio(d->irq);
uint32_t mask;
if (gpio >= 0 && gpio < 128)


@ -257,7 +257,8 @@ static void tegra_gpio_irq_handler(unsigned int irq, struct irq_desc *desc)
void tegra_gpio_resume(void)
{
unsigned long flags;
int b, p, i;
int b;
int p;
local_irq_save(flags);
@ -280,7 +281,8 @@ void tegra_gpio_resume(void)
void tegra_gpio_suspend(void)
{
unsigned long flags;
int b, p, i;
int b;
int p;
local_irq_save(flags);
for (b = 0; b < ARRAY_SIZE(tegra_gpio_banks); b++) {

View File

@ -1362,14 +1362,15 @@ static int tegra_clk_shared_bus_set_rate(struct clk *c, unsigned long rate)
{
unsigned long flags;
int ret;
long new_rate = rate;
rate = clk_round_rate(c->parent, rate);
if (rate < 0)
return rate;
new_rate = clk_round_rate(c->parent, new_rate);
if (new_rate < 0)
return new_rate;
spin_lock_irqsave(&c->parent->spinlock, flags);
c->u.shared_bus_user.rate = rate;
c->u.shared_bus_user.rate = new_rate;
ret = tegra_clk_shared_bus_update(c->parent);
spin_unlock_irqrestore(&c->parent->spinlock, flags);


@ -7,6 +7,7 @@
#include <linux/shm.h>
#include <linux/sched.h>
#include <linux/io.h>
#include <linux/personality.h>
#include <linux/random.h>
#include <asm/cputype.h>
#include <asm/system.h>
@ -82,7 +83,8 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
mm->cached_hole_size = 0;
}
/* 8 bits of randomness in 20 address space bits */
if (current->flags & PF_RANDOMIZE)
if ((current->flags & PF_RANDOMIZE) &&
!(current->personality & ADDR_NO_RANDOMIZE))
addr += (get_random_int() % (1 << 8)) << PAGE_SHIFT;
full_search:


@ -390,7 +390,7 @@ ENTRY(cpu_arm920_set_pte_ext)
/* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */
.globl cpu_arm920_suspend_size
.equ cpu_arm920_suspend_size, 4 * 3
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_arm920_do_suspend)
stmfd sp!, {r4 - r7, lr}
mrc p15, 0, r4, c13, c0, 0 @ PID


@ -404,7 +404,7 @@ ENTRY(cpu_arm926_set_pte_ext)
/* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */
.globl cpu_arm926_suspend_size
.equ cpu_arm926_suspend_size, 4 * 3
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_arm926_do_suspend)
stmfd sp!, {r4 - r7, lr}
mrc p15, 0, r4, c13, c0, 0 @ PID


@ -171,7 +171,7 @@ ENTRY(cpu_sa1100_set_pte_ext)
.globl cpu_sa1100_suspend_size
.equ cpu_sa1100_suspend_size, 4*4
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_sa1100_do_suspend)
stmfd sp!, {r4 - r7, lr}
mrc p15, 0, r4, c3, c0, 0 @ domain ID


@ -124,7 +124,7 @@ ENTRY(cpu_v6_set_pte_ext)
/* Suspend/resume support: taken from arch/arm/mach-s3c64xx/sleep.S */
.globl cpu_v6_suspend_size
.equ cpu_v6_suspend_size, 4 * 8
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_v6_do_suspend)
stmfd sp!, {r4 - r11, lr}
mrc p15, 0, r4, c13, c0, 0 @ FCSE/PID


@ -211,7 +211,7 @@ cpu_v7_name:
/* Suspend/resume support: derived from arch/arm/mach-s5pv210/sleep.S */
.globl cpu_v7_suspend_size
.equ cpu_v7_suspend_size, 4 * 8
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_v7_do_suspend)
stmfd sp!, {r4 - r11, lr}
mrc p15, 0, r4, c13, c0, 0 @ FCSE/PID


@ -417,7 +417,7 @@ ENTRY(cpu_xsc3_set_pte_ext)
.globl cpu_xsc3_suspend_size
.equ cpu_xsc3_suspend_size, 4 * 8
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_xsc3_do_suspend)
stmfd sp!, {r4 - r10, lr}
mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode


@ -518,7 +518,7 @@ ENTRY(cpu_xscale_set_pte_ext)
.globl cpu_xscale_suspend_size
.equ cpu_xscale_suspend_size, 4 * 7
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_xscale_do_suspend)
stmfd sp!, {r4 - r10, lr}
mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode


@ -19,17 +19,6 @@
#define PFX "s5p pm: "
/* s3c_pm_check_resume_pin
*
* check to see if the pin is configured correctly for sleep mode, and
* make any necessary adjustments if it is not
*/
static void s3c_pm_check_resume_pin(unsigned int pin, unsigned int irqoffs)
{
/* nothing here yet */
}
/* s3c_pm_configure_extint
*
* configure all external interrupt pins


@ -164,7 +164,6 @@ static inline int in_region(void *ptr, int size, void *what, size_t whatsz)
*/
static u32 *s3c_pm_runcheck(struct resource *res, u32 *val)
{
void *save_at = phys_to_virt(s3c_sleep_save_phys);
unsigned long addr;
unsigned long left;
void *stkpage;
@ -192,11 +191,6 @@ static u32 *s3c_pm_runcheck(struct resource *res, u32 *val)
goto skip_check;
}
if (in_region(ptr, left, save_at, 32*4 )) {
S3C_PMDBG("skipping %08lx, has save block in\n", addr);
goto skip_check;
}
/* calculate and check the checksum */
calc = crc32_le(~0, ptr, left);


@ -214,8 +214,9 @@ void s3c_pm_do_restore_core(struct sleep_save *ptr, int count)
*
* print any IRQs asserted at resume time (ie, we woke from)
*/
static void s3c_pm_show_resume_irqs(int start, unsigned long which,
unsigned long mask)
static void __maybe_unused s3c_pm_show_resume_irqs(int start,
unsigned long which,
unsigned long mask)
{
int i;


@ -78,6 +78,14 @@ static void vfp_thread_exit(struct thread_info *thread)
put_cpu();
}
static void vfp_thread_copy(struct thread_info *thread)
{
struct thread_info *parent = current_thread_info();
vfp_sync_hwstate(parent);
thread->vfpstate = parent->vfpstate;
}
/*
* When this function is called with the following 'cmd's, the following
* is true while this function is being run:
@ -104,12 +112,17 @@ static void vfp_thread_exit(struct thread_info *thread)
static int vfp_notifier(struct notifier_block *self, unsigned long cmd, void *v)
{
struct thread_info *thread = v;
u32 fpexc;
#ifdef CONFIG_SMP
unsigned int cpu;
#endif
if (likely(cmd == THREAD_NOTIFY_SWITCH)) {
u32 fpexc = fmrx(FPEXC);
switch (cmd) {
case THREAD_NOTIFY_SWITCH:
fpexc = fmrx(FPEXC);
#ifdef CONFIG_SMP
unsigned int cpu = thread->cpu;
cpu = thread->cpu;
/*
* On SMP, if VFP is enabled, save the old state in
@ -134,13 +147,20 @@ static int vfp_notifier(struct notifier_block *self, unsigned long cmd, void *v)
* old state.
*/
fmxr(FPEXC, fpexc & ~FPEXC_EN);
return NOTIFY_DONE;
}
break;
if (cmd == THREAD_NOTIFY_FLUSH)
case THREAD_NOTIFY_FLUSH:
vfp_thread_flush(thread);
else
break;
case THREAD_NOTIFY_EXIT:
vfp_thread_exit(thread);
break;
case THREAD_NOTIFY_COPY:
vfp_thread_copy(thread);
break;
}
return NOTIFY_DONE;
}


@ -19,11 +19,11 @@
* Force strict CPU ordering.
*/
#define nop() __asm__ __volatile__ ("nop;\n\t" : : )
#define mb() __asm__ __volatile__ ("" : : : "memory")
#define rmb() __asm__ __volatile__ ("" : : : "memory")
#define wmb() __asm__ __volatile__ ("" : : : "memory")
#define set_mb(var, value) do { (void) xchg(&var, value); } while (0)
#define read_barrier_depends() do { } while(0)
#define smp_mb() mb()
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#define set_mb(var, value) do { var = value; mb(); } while (0)
#define smp_read_barrier_depends() read_barrier_depends()
#ifdef CONFIG_SMP
asmlinkage unsigned long __raw_xchg_1_asm(volatile void *ptr, unsigned long value);
@ -37,16 +37,16 @@ asmlinkage unsigned long __raw_cmpxchg_4_asm(volatile void *ptr,
unsigned long new, unsigned long old);
#ifdef __ARCH_SYNC_CORE_DCACHE
# define smp_mb() do { barrier(); smp_check_barrier(); smp_mark_barrier(); } while (0)
# define smp_rmb() do { barrier(); smp_check_barrier(); } while (0)
# define smp_wmb() do { barrier(); smp_mark_barrier(); } while (0)
#define smp_read_barrier_depends() do { barrier(); smp_check_barrier(); } while (0)
/* Force Core data cache coherence */
# define mb() do { barrier(); smp_check_barrier(); smp_mark_barrier(); } while (0)
# define rmb() do { barrier(); smp_check_barrier(); } while (0)
# define wmb() do { barrier(); smp_mark_barrier(); } while (0)
# define read_barrier_depends() do { barrier(); smp_check_barrier(); } while (0)
#else
# define smp_mb() barrier()
# define smp_rmb() barrier()
# define smp_wmb() barrier()
#define smp_read_barrier_depends() barrier()
# define mb() barrier()
# define rmb() barrier()
# define wmb() barrier()
# define read_barrier_depends() do { } while (0)
#endif
static inline unsigned long __xchg(unsigned long x, volatile void *ptr,
@ -99,10 +99,10 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
#else /* !CONFIG_SMP */
#define smp_mb() barrier()
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#define smp_read_barrier_depends() do { } while(0)
#define mb() barrier()
#define rmb() barrier()
#define wmb() barrier()
#define read_barrier_depends() do { } while (0)
struct __xchg_dummy {
unsigned long a[100];


@ -268,7 +268,7 @@ void disable_gptimers(uint16_t mask)
_disable_gptimers(mask);
for (i = 0; i < MAX_BLACKFIN_GPTIMERS; ++i)
if (mask & (1 << i))
group_regs[BFIN_TIMER_OCTET(i)]->status |= trun_mask[i];
group_regs[BFIN_TIMER_OCTET(i)]->status = trun_mask[i];
SSYNC();
}
EXPORT_SYMBOL(disable_gptimers);


@ -206,8 +206,14 @@ irqreturn_t bfin_gptmr0_interrupt(int irq, void *dev_id)
{
struct clock_event_device *evt = dev_id;
smp_mb();
evt->event_handler(evt);
/*
* We want to ACK before we handle so that we can handle smaller timer
* intervals. This way if the timer expires again while we're handling
* things, we're more likely to see that 2nd int rather than swallowing
* it by ACKing the int at the end of this handler.
*/
bfin_gptmr0_ack();
evt->event_handler(evt);
return IRQ_HANDLED;
}


@ -109,10 +109,23 @@ static void ipi_flush_icache(void *info)
struct blackfin_flush_data *fdata = info;
/* Invalidate the memory holding the bounds of the flushed region. */
invalidate_dcache_range((unsigned long)fdata,
(unsigned long)fdata + sizeof(*fdata));
blackfin_dcache_invalidate_range((unsigned long)fdata,
(unsigned long)fdata + sizeof(*fdata));
flush_icache_range(fdata->start, fdata->end);
/* Make sure all write buffers in the data side of the core
* are flushed before trying to invalidate the icache. This
* needs to be after the data flush and before the icache
* flush so that the SSYNC does the right thing in preventing
* the instruction prefetcher from hitting things in cached
* memory at the wrong time -- it runs much further ahead than
* the pipeline.
*/
SSYNC();
/* ipi_flush_icache is invoked by generic flush_icache_range,
* so call blackfin arch icache flush directly here.
*/
blackfin_icache_flush_range(fdata->start, fdata->end);
}
static void ipi_call_function(unsigned int cpu, struct ipi_message *msg)


@ -6,7 +6,6 @@ config MICROBLAZE
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
select USB_ARCH_HAS_EHCI
select ARCH_WANT_OPTIONAL_GPIOLIB
select HAVE_OPROFILE
select HAVE_ARCH_KGDB


@ -209,7 +209,7 @@ config ARCH_HIBERNATION_POSSIBLE
config ARCH_SUSPEND_POSSIBLE
def_bool y
depends on ADB_PMU || PPC_EFIKA || PPC_LITE5200 || PPC_83xx || \
PPC_85xx || PPC_86xx || PPC_PSERIES || 44x || 40x
(PPC_85xx && !SMP) || PPC_86xx || PPC_PSERIES || 44x || 40x
config PPC_DCR_NATIVE
bool


@ -382,10 +382,12 @@ extern const char *powerpc_base_platform;
#define CPU_FTRS_E500_2 (CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
#define CPU_FTRS_E500MC (CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NODSISRALIGN | \
#define CPU_FTRS_E500MC (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | \
CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
CPU_FTR_DBELL)
#define CPU_FTRS_E5500 (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | \
CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
CPU_FTR_DBELL | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD)
#define CPU_FTRS_GENERIC_32 (CPU_FTR_COMMON | CPU_FTR_NODSISRALIGN)
/* 64-bit CPUs */
@ -435,11 +437,15 @@ extern const char *powerpc_base_platform;
#define CPU_FTRS_COMPATIBLE (CPU_FTR_USE_TB | CPU_FTR_PPCAS_ARCH_V2)
#ifdef __powerpc64__
#ifdef CONFIG_PPC_BOOK3E
#define CPU_FTRS_POSSIBLE (CPU_FTRS_E5500)
#else
#define CPU_FTRS_POSSIBLE \
(CPU_FTRS_POWER3 | CPU_FTRS_RS64 | CPU_FTRS_POWER4 | \
CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | CPU_FTRS_POWER6 | \
CPU_FTRS_POWER7 | CPU_FTRS_CELL | CPU_FTRS_PA6T | \
CPU_FTR_1T_SEGMENT | CPU_FTR_VSX)
#endif
#else
enum {
CPU_FTRS_POSSIBLE =
@ -473,16 +479,21 @@ enum {
#endif
#ifdef CONFIG_E500
CPU_FTRS_E500 | CPU_FTRS_E500_2 | CPU_FTRS_E500MC |
CPU_FTRS_E5500 |
#endif
0,
};
#endif /* __powerpc64__ */
#ifdef __powerpc64__
#ifdef CONFIG_PPC_BOOK3E
#define CPU_FTRS_ALWAYS (CPU_FTRS_E5500)
#else
#define CPU_FTRS_ALWAYS \
(CPU_FTRS_POWER3 & CPU_FTRS_RS64 & CPU_FTRS_POWER4 & \
CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & CPU_FTRS_POWER6 & \
CPU_FTRS_POWER7 & CPU_FTRS_CELL & CPU_FTRS_PA6T & CPU_FTRS_POSSIBLE)
#endif
#else
enum {
CPU_FTRS_ALWAYS =
@ -513,6 +524,7 @@ enum {
#endif
#ifdef CONFIG_E500
CPU_FTRS_E500 & CPU_FTRS_E500_2 & CPU_FTRS_E500MC &
CPU_FTRS_E5500 &
#endif
CPU_FTRS_POSSIBLE,
};


@ -162,7 +162,7 @@ extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
* on platforms where such control is possible.
*/
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
defined(CONFIG_KPROBES)
defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
#else
#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX


@ -1973,7 +1973,7 @@ static struct cpu_spec __initdata cpu_specs[] = {
.pvr_mask = 0xffff0000,
.pvr_value = 0x80240000,
.cpu_name = "e5500",
.cpu_features = CPU_FTRS_E500MC,
.cpu_features = CPU_FTRS_E5500,
.cpu_user_features = COMMON_USER_BOOKE,
.mmu_features = MMU_FTR_TYPE_FSL_E | MMU_FTR_BIG_PHYS |
MMU_FTR_USE_TLBILX,


@ -163,7 +163,7 @@ static void crash_kexec_prepare_cpus(int cpu)
}
/* wait for all the CPUs to hit real mode but timeout if they don't come in */
#if defined(CONFIG_PPC_STD_MMU_64) && defined(CONFIG_SMP)
#ifdef CONFIG_PPC_STD_MMU_64
static void crash_kexec_wait_realmode(int cpu)
{
unsigned int msecs;
@ -188,9 +188,7 @@ static void crash_kexec_wait_realmode(int cpu)
}
mb();
}
#else
static inline void crash_kexec_wait_realmode(int cpu) {}
#endif
#endif /* CONFIG_PPC_STD_MMU_64 */
/*
* This function will be called by secondary cpus or by kexec cpu
@ -235,7 +233,9 @@ void crash_kexec_secondary(struct pt_regs *regs)
crash_ipi_callback(regs);
}
#else
#else /* ! CONFIG_SMP */
static inline void crash_kexec_wait_realmode(int cpu) {}
static void crash_kexec_prepare_cpus(int cpu)
{
/*
@ -255,7 +255,7 @@ void crash_kexec_secondary(struct pt_regs *regs)
{
cpus_in_sr = CPU_MASK_NONE;
}
#endif
#endif /* CONFIG_SMP */
/*
* Register a function to be called on shutdown. Only use this if you


@ -330,9 +330,11 @@ void __init find_legacy_serial_ports(void)
if (!parent)
continue;
if (of_match_node(legacy_serial_parents, parent) != NULL) {
index = add_legacy_soc_port(np, np);
if (index >= 0 && np == stdout)
legacy_serial_console = index;
if (of_device_is_available(np)) {
index = add_legacy_soc_port(np, np);
if (index >= 0 && np == stdout)
legacy_serial_console = index;
}
}
of_node_put(parent);
}


@ -398,6 +398,25 @@ static int check_excludes(struct perf_event **ctrs, unsigned int cflags[],
return 0;
}
static u64 check_and_compute_delta(u64 prev, u64 val)
{
u64 delta = (val - prev) & 0xfffffffful;
/*
* POWER7 can roll back counter values, if the new value is smaller
* than the previous value it will cause the delta and the counter to
have bogus values unless we rolled a counter over. If a counter is
rolled back, it will be smaller, but within 256, which is the maximum
number of events to rollback at once. If we detect a rollback
* return 0. This can lead to a small lack of precision in the
* counters.
*/
if (prev > val && (prev - val) < 256)
delta = 0;
return delta;
}
static void power_pmu_read(struct perf_event *event)
{
s64 val, delta, prev;
@ -416,10 +435,11 @@ static void power_pmu_read(struct perf_event *event)
prev = local64_read(&event->hw.prev_count);
barrier();
val = read_pmc(event->hw.idx);
delta = check_and_compute_delta(prev, val);
if (!delta)
return;
} while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev);
/* The counters are only 32 bits wide */
delta = (val - prev) & 0xfffffffful;
local64_add(delta, &event->count);
local64_sub(delta, &event->hw.period_left);
}
@ -449,8 +469,9 @@ static void freeze_limited_counters(struct cpu_hw_events *cpuhw,
val = (event->hw.idx == 5) ? pmc5 : pmc6;
prev = local64_read(&event->hw.prev_count);
event->hw.idx = 0;
delta = (val - prev) & 0xfffffffful;
local64_add(delta, &event->count);
delta = check_and_compute_delta(prev, val);
if (delta)
local64_add(delta, &event->count);
}
}
@ -458,14 +479,16 @@ static void thaw_limited_counters(struct cpu_hw_events *cpuhw,
unsigned long pmc5, unsigned long pmc6)
{
struct perf_event *event;
u64 val;
u64 val, prev;
int i;
for (i = 0; i < cpuhw->n_limited; ++i) {
event = cpuhw->limited_counter[i];
event->hw.idx = cpuhw->limited_hwidx[i];
val = (event->hw.idx == 5) ? pmc5 : pmc6;
local64_set(&event->hw.prev_count, val);
prev = local64_read(&event->hw.prev_count);
if (check_and_compute_delta(prev, val))
local64_set(&event->hw.prev_count, val);
perf_event_update_userpage(event);
}
}
@ -1197,7 +1220,7 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
/* we don't have to worry about interrupts here */
prev = local64_read(&event->hw.prev_count);
delta = (val - prev) & 0xfffffffful;
delta = check_and_compute_delta(prev, val);
local64_add(delta, &event->count);
/*


@ -229,6 +229,9 @@ static u64 scan_dispatch_log(u64 stop_tb)
u64 stolen = 0;
u64 dtb;
if (!dtl)
return 0;
if (i == vpa->dtl_idx)
return 0;
while (i < vpa->dtl_idx) {


@ -842,6 +842,7 @@ static void __devinit smp_core99_setup_cpu(int cpu_nr)
mpic_setup_this_cpu();
}
#ifdef CONFIG_PPC64
#ifdef CONFIG_HOTPLUG_CPU
static int smp_core99_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
@ -879,7 +880,6 @@ static struct notifier_block __cpuinitdata smp_core99_cpu_nb = {
static void __init smp_core99_bringup_done(void)
{
#ifdef CONFIG_PPC64
extern void g5_phy_disable_cpu1(void);
/* Close i2c bus if it was used for tb sync */
@ -894,14 +894,14 @@ static void __init smp_core99_bringup_done(void)
set_cpu_present(1, false);
g5_phy_disable_cpu1();
}
#endif /* CONFIG_PPC64 */
#ifdef CONFIG_HOTPLUG_CPU
register_cpu_notifier(&smp_core99_cpu_nb);
#endif
if (ppc_md.progress)
ppc_md.progress("smp_core99_bringup_done", 0x349);
}
#endif /* CONFIG_PPC64 */
#ifdef CONFIG_HOTPLUG_CPU
@ -975,7 +975,9 @@ static void pmac_cpu_die(void)
struct smp_ops_t core99_smp_ops = {
.message_pass = smp_mpic_message_pass,
.probe = smp_core99_probe,
#ifdef CONFIG_PPC64
.bringup_done = smp_core99_bringup_done,
#endif
.kick_cpu = smp_core99_kick_cpu,
.setup_cpu = smp_core99_setup_cpu,
.give_timebase = smp_core99_give_timebase,


@ -287,14 +287,22 @@ static int alloc_dispatch_logs(void)
int cpu, ret;
struct paca_struct *pp;
struct dtl_entry *dtl;
struct kmem_cache *dtl_cache;
if (!firmware_has_feature(FW_FEATURE_SPLPAR))
return 0;
dtl_cache = kmem_cache_create("dtl", DISPATCH_LOG_BYTES,
DISPATCH_LOG_BYTES, 0, NULL);
if (!dtl_cache) {
pr_warn("Failed to create dispatch trace log buffer cache\n");
pr_warn("Stolen time statistics will be unreliable\n");
return 0;
}
for_each_possible_cpu(cpu) {
pp = &paca[cpu];
dtl = kmalloc_node(DISPATCH_LOG_BYTES, GFP_KERNEL,
cpu_to_node(cpu));
dtl = kmem_cache_alloc(dtl_cache, GFP_KERNEL);
if (!dtl) {
pr_warn("Failed to allocate dispatch trace log for cpu %d\n",
cpu);


@ -324,6 +324,11 @@ int __init fsl_add_bridge(struct device_node *dev, int is_primary)
struct resource rsrc;
const int *bus_range;
if (!of_device_is_available(dev)) {
pr_warning("%s: disabled\n", dev->full_name);
return -ENODEV;
}
pr_debug("Adding PCI host bridge %s\n", dev->full_name);
/* Fetch host bridge registers address */


@ -1457,7 +1457,6 @@ int fsl_rio_setup(struct platform_device *dev)
port->ops = ops;
port->priv = priv;
port->phys_efptr = 0x100;
rio_register_mport(port);
priv->regs_win = ioremap(regs.start, regs.end - regs.start + 1);
rio_regs_win = priv->regs_win;
@ -1504,6 +1503,9 @@ int fsl_rio_setup(struct platform_device *dev)
dev_info(&dev->dev, "RapidIO Common Transport System size: %d\n",
port->sys_size ? 65536 : 256);
if (rio_register_mport(port))
goto err;
if (port->host_deviceid >= 0)
out_be32(priv->regs_win + RIO_GCCSR, RIO_PORT_GEN_HOST |
RIO_PORT_GEN_MASTER | RIO_PORT_GEN_DISCOVERED);


@ -4,6 +4,10 @@ menu "UML-specific options"
menu "Host processor type and features"
config CMPXCHG_LOCAL
bool
default n
source "arch/x86/Kconfig.cpu"
endmenu


@ -0,0 +1,6 @@
#ifndef __UM_BUG_H
#define __UM_BUG_H
#include <asm-generic/bug.h>
#endif


@ -96,11 +96,15 @@
#define MSR_IA32_MC0_ADDR 0x00000402
#define MSR_IA32_MC0_MISC 0x00000403
#define MSR_AMD64_MC0_MASK 0xc0010044
#define MSR_IA32_MCx_CTL(x) (MSR_IA32_MC0_CTL + 4*(x))
#define MSR_IA32_MCx_STATUS(x) (MSR_IA32_MC0_STATUS + 4*(x))
#define MSR_IA32_MCx_ADDR(x) (MSR_IA32_MC0_ADDR + 4*(x))
#define MSR_IA32_MCx_MISC(x) (MSR_IA32_MC0_MISC + 4*(x))
#define MSR_AMD64_MCx_MASK(x) (MSR_AMD64_MC0_MASK + (x))
/* These are consecutive and not in the normal 4er MCE bank block */
#define MSR_IA32_MC0_CTL2 0x00000280
#define MSR_IA32_MCx_CTL2(x) (MSR_IA32_MC0_CTL2 + (x))


@ -615,6 +615,25 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
/* As a rule processors have APIC timer running in deep C states */
if (c->x86 >= 0xf && !cpu_has_amd_erratum(amd_erratum_400))
set_cpu_cap(c, X86_FEATURE_ARAT);
/*
* Disable GART TLB Walk Errors on Fam10h. We do this here
* because this is always needed when GART is enabled, even in a
* kernel which has no MCE support built in.
*/
if (c->x86 == 0x10) {
/*
* BIOS should disable GartTlbWlk Errors itself. If
* it doesn't, do it here as suggested by the BKDG.
*
* Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=33012
*/
u64 mask;
rdmsrl(MSR_AMD64_MCx_MASK(4), mask);
mask |= (1 << 10);
wrmsrl(MSR_AMD64_MCx_MASK(4), mask);
}
}
#ifdef CONFIG_X86_32


@ -312,6 +312,26 @@ void __cpuinit smp_store_cpu_info(int id)
identify_secondary_cpu(c);
}
static void __cpuinit check_cpu_siblings_on_same_node(int cpu1, int cpu2)
{
int node1 = early_cpu_to_node(cpu1);
int node2 = early_cpu_to_node(cpu2);
/*
* Our CPU scheduler assumes all logical cpus in the same physical cpu
* share the same node. But, buggy ACPI or NUMA emulation might assign
them to different nodes. Fix it.
*/
if (node1 != node2) {
pr_warning("CPU %d in node %d and CPU %d in node %d are in the same physical CPU. forcing same node %d\n",
cpu1, node1, cpu2, node2, node2);
numa_remove_cpu(cpu1);
numa_set_node(cpu1, node2);
numa_add_cpu(cpu1);
}
}
static void __cpuinit link_thread_siblings(int cpu1, int cpu2)
{
cpumask_set_cpu(cpu1, cpu_sibling_mask(cpu2));
@ -320,6 +340,7 @@ static void __cpuinit link_thread_siblings(int cpu1, int cpu2)
cpumask_set_cpu(cpu2, cpu_core_mask(cpu1));
cpumask_set_cpu(cpu1, cpu_llc_shared_mask(cpu2));
cpumask_set_cpu(cpu2, cpu_llc_shared_mask(cpu1));
check_cpu_siblings_on_same_node(cpu1, cpu2);
}
@ -361,10 +382,12 @@ void __cpuinit set_cpu_sibling_map(int cpu)
per_cpu(cpu_llc_id, cpu) == per_cpu(cpu_llc_id, i)) {
cpumask_set_cpu(i, cpu_llc_shared_mask(cpu));
cpumask_set_cpu(cpu, cpu_llc_shared_mask(i));
check_cpu_siblings_on_same_node(cpu, i);
}
if (c->phys_proc_id == cpu_data(i).phys_proc_id) {
cpumask_set_cpu(i, cpu_core_mask(cpu));
cpumask_set_cpu(cpu, cpu_core_mask(i));
check_cpu_siblings_on_same_node(cpu, i);
/*
* Does this new cpu bringup a new core?
*/


@ -74,6 +74,7 @@
compatible = "intel,ce4100-pci", "pci";
device_type = "pci";
bus-range = <1 1>;
reg = <0x0800 0x0 0x0 0x0 0x0>;
ranges = <0x2000000 0 0xdffe0000 0x2000000 0 0xdffe0000 0 0x1000>;
interrupt-parent = <&ioapic2>;
@ -412,6 +413,7 @@
#address-cells = <2>;
#size-cells = <1>;
compatible = "isa";
reg = <0xf800 0x0 0x0 0x0 0x0>;
ranges = <1 0 0 0 0 0x100>;
rtc@70 {


@ -97,11 +97,11 @@ static int __init sfi_parse_mtmr(struct sfi_table_header *table)
pentry->freq_hz, pentry->irq);
if (!pentry->irq)
continue;
mp_irq.type = MP_IOAPIC;
mp_irq.type = MP_INTSRC;
mp_irq.irqtype = mp_INT;
/* triggering mode edge bit 2-3, active high polarity bit 0-1 */
mp_irq.irqflag = 5;
mp_irq.srcbus = 0;
mp_irq.srcbus = MP_BUS_ISA;
mp_irq.srcbusirq = pentry->irq; /* IRQ */
mp_irq.dstapic = MP_APIC_ALL;
mp_irq.dstirq = pentry->irq;
@ -168,10 +168,10 @@ int __init sfi_parse_mrtc(struct sfi_table_header *table)
for (totallen = 0; totallen < sfi_mrtc_num; totallen++, pentry++) {
pr_debug("RTC[%d]: paddr = 0x%08x, irq = %d\n",
totallen, (u32)pentry->phys_addr, pentry->irq);
mp_irq.type = MP_IOAPIC;
mp_irq.type = MP_INTSRC;
mp_irq.irqtype = mp_INT;
mp_irq.irqflag = 0xf; /* level trigger and active low */
mp_irq.srcbus = 0;
mp_irq.srcbus = MP_BUS_ISA;
mp_irq.srcbusirq = pentry->irq; /* IRQ */
mp_irq.dstapic = MP_APIC_ALL;
mp_irq.dstirq = pentry->irq;
@ -282,7 +282,7 @@ void __init x86_mrst_early_setup(void)
/* Avoid searching for BIOS MP tables */
x86_init.mpparse.find_smp_config = x86_init_noop;
x86_init.mpparse.get_smp_config = x86_init_uint_noop;
set_bit(MP_BUS_ISA, mp_bus_not_pci);
}
/*


@ -198,26 +198,13 @@ void blk_dump_rq_flags(struct request *rq, char *msg)
}
EXPORT_SYMBOL(blk_dump_rq_flags);
/*
* Make sure that plugs that were pending when this function was entered,
* are now complete and requests pushed to the queue.
*/
static inline void queue_sync_plugs(struct request_queue *q)
{
/*
* If the current process is plugged and has barriers submitted,
* we will livelock if we don't unplug first.
*/
blk_flush_plug(current);
}
static void blk_delay_work(struct work_struct *work)
{
struct request_queue *q;
q = container_of(work, struct request_queue, delay_work.work);
spin_lock_irq(q->queue_lock);
__blk_run_queue(q, false);
__blk_run_queue(q);
spin_unlock_irq(q->queue_lock);
}
@ -233,7 +220,8 @@ static void blk_delay_work(struct work_struct *work)
*/
void blk_delay_queue(struct request_queue *q, unsigned long msecs)
{
schedule_delayed_work(&q->delay_work, msecs_to_jiffies(msecs));
queue_delayed_work(kblockd_workqueue, &q->delay_work,
msecs_to_jiffies(msecs));
}
EXPORT_SYMBOL(blk_delay_queue);
@ -251,7 +239,7 @@ void blk_start_queue(struct request_queue *q)
WARN_ON(!irqs_disabled());
queue_flag_clear(QUEUE_FLAG_STOPPED, q);
__blk_run_queue(q, false);
__blk_run_queue(q);
}
EXPORT_SYMBOL(blk_start_queue);
@ -298,7 +286,6 @@ void blk_sync_queue(struct request_queue *q)
{
del_timer_sync(&q->timeout);
cancel_delayed_work_sync(&q->delay_work);
queue_sync_plugs(q);
}
EXPORT_SYMBOL(blk_sync_queue);
@ -310,9 +297,8 @@ EXPORT_SYMBOL(blk_sync_queue);
* Description:
* See @blk_run_queue. This variant must be called with the queue lock
* held and interrupts disabled.
*
*/
void __blk_run_queue(struct request_queue *q, bool force_kblockd)
void __blk_run_queue(struct request_queue *q)
{
if (unlikely(blk_queue_stopped(q)))
return;
@ -321,7 +307,7 @@ void __blk_run_queue(struct request_queue *q, bool force_kblockd)
* Only recurse once to avoid overrunning the stack, let the unplug
* handling reinvoke the handler shortly if we already got there.
*/
if (!force_kblockd && !queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
q->request_fn(q);
queue_flag_clear(QUEUE_FLAG_REENTER, q);
} else
@ -329,6 +315,20 @@ void __blk_run_queue(struct request_queue *q, bool force_kblockd)
}
EXPORT_SYMBOL(__blk_run_queue);
/**
* blk_run_queue_async - run a single device queue in workqueue context
* @q: The queue to run
*
* Description:
* Tells kblockd to perform the equivalent of @blk_run_queue on behalf
* of us.
*/
void blk_run_queue_async(struct request_queue *q)
{
if (likely(!blk_queue_stopped(q)))
queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
}
/**
* blk_run_queue - run a single device queue
* @q: The queue to run
@ -342,7 +342,7 @@ void blk_run_queue(struct request_queue *q)
unsigned long flags;
spin_lock_irqsave(q->queue_lock, flags);
__blk_run_queue(q, false);
__blk_run_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
}
EXPORT_SYMBOL(blk_run_queue);
@ -991,7 +991,7 @@ void blk_insert_request(struct request_queue *q, struct request *rq,
blk_queue_end_tag(q, rq);
add_acct_request(q, rq, where);
__blk_run_queue(q, false);
__blk_run_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
}
EXPORT_SYMBOL(blk_insert_request);
@ -1311,7 +1311,15 @@ get_rq:
plug = current->plug;
if (plug) {
if (!plug->should_sort && !list_empty(&plug->list)) {
/*
* If this is the first request added after a plug, fire
off a plug trace. If others have been added before, check
* if we have multiple devices in this plug. If so, make a
* note to sort the list before dispatch.
*/
if (list_empty(&plug->list))
trace_block_plug(q);
else if (!plug->should_sort) {
struct request *__rq;
__rq = list_entry_rq(plug->list.prev);
@ -1327,7 +1335,7 @@ get_rq:
} else {
spin_lock_irq(q->queue_lock);
add_acct_request(q, req, where);
__blk_run_queue(q, false);
__blk_run_queue(q);
out_unlock:
spin_unlock_irq(q->queue_lock);
}
@ -2644,6 +2652,7 @@ void blk_start_plug(struct blk_plug *plug)
plug->magic = PLUG_MAGIC;
INIT_LIST_HEAD(&plug->list);
INIT_LIST_HEAD(&plug->cb_list);
plug->should_sort = 0;
/*
@ -2668,33 +2677,93 @@ static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b)
return !(rqa->q <= rqb->q);
}
static void flush_plug_list(struct blk_plug *plug)
/*
* If 'from_schedule' is true, then postpone the dispatch of requests
until a safe kblockd context. We do this to avoid accidental big
additional stack usage in driver dispatch, in places where the original
* plugger did not intend it.
*/
static void queue_unplugged(struct request_queue *q, unsigned int depth,
bool from_schedule)
__releases(q->queue_lock)
{
trace_block_unplug(q, depth, !from_schedule);
/*
* If we are punting this to kblockd, then we can safely drop
* the queue_lock before waking kblockd (which needs to take
* this lock).
*/
if (from_schedule) {
spin_unlock(q->queue_lock);
blk_run_queue_async(q);
} else {
__blk_run_queue(q);
spin_unlock(q->queue_lock);
}
}
static void flush_plug_callbacks(struct blk_plug *plug)
{
LIST_HEAD(callbacks);
if (list_empty(&plug->cb_list))
return;
list_splice_init(&plug->cb_list, &callbacks);
while (!list_empty(&callbacks)) {
struct blk_plug_cb *cb = list_first_entry(&callbacks,
struct blk_plug_cb,
list);
list_del(&cb->list);
cb->callback(cb);
}
}
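A sketch of how a callback lands on plug->cb_list in the first place (names hypothetical; the md hunks below do exactly this with md_plug_cb):

struct my_plug_cb {
	struct blk_plug_cb cb;
	void *data;
};

static void my_unplug(struct blk_plug_cb *cb)
{
	struct my_plug_cb *mcb = container_of(cb, struct my_plug_cb, cb);

	/* runs once when the plug is flushed; consume mcb->data, then free */
	kfree(mcb);
}

static int my_register_unplug_cb(void *data)
{
	struct blk_plug *plug = current->plug;
	struct my_plug_cb *mcb;

	if (!plug)
		return 0;	/* no plug active; caller acts immediately */

	mcb = kmalloc(sizeof(*mcb), GFP_ATOMIC);
	if (!mcb)
		return 0;
	mcb->data = data;
	mcb->cb.callback = my_unplug;
	list_add(&mcb->cb.list, &plug->cb_list);
	return 1;
}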
void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
{
struct request_queue *q;
unsigned long flags;
struct request *rq;
LIST_HEAD(list);
unsigned int depth;
BUG_ON(plug->magic != PLUG_MAGIC);
flush_plug_callbacks(plug);
if (list_empty(&plug->list))
return;
if (plug->should_sort)
list_sort(NULL, &plug->list, plug_rq_cmp);
list_splice_init(&plug->list, &list);
if (plug->should_sort) {
list_sort(NULL, &list, plug_rq_cmp);
plug->should_sort = 0;
}
q = NULL;
depth = 0;
/*
* Save and disable interrupts here, to avoid doing it for every
* queue lock we have to take.
*/
local_irq_save(flags);
while (!list_empty(&plug->list)) {
rq = list_entry_rq(plug->list.next);
while (!list_empty(&list)) {
rq = list_entry_rq(list.next);
list_del_init(&rq->queuelist);
BUG_ON(!(rq->cmd_flags & REQ_ON_PLUG));
BUG_ON(!rq->q);
if (rq->q != q) {
if (q) {
__blk_run_queue(q, false);
spin_unlock(q->queue_lock);
}
/*
* This drops the queue lock
*/
if (q)
queue_unplugged(q, depth, from_schedule);
q = rq->q;
depth = 0;
spin_lock(q->queue_lock);
}
rq->cmd_flags &= ~REQ_ON_PLUG;
@ -2706,39 +2775,29 @@ static void flush_plug_list(struct blk_plug *plug)
__elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH);
else
__elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE);
depth++;
}
if (q) {
__blk_run_queue(q, false);
spin_unlock(q->queue_lock);
}
/*
* This drops the queue lock
*/
if (q)
queue_unplugged(q, depth, from_schedule);
BUG_ON(!list_empty(&plug->list));
local_irq_restore(flags);
}
static void __blk_finish_plug(struct task_struct *tsk, struct blk_plug *plug)
{
flush_plug_list(plug);
if (plug == tsk->plug)
tsk->plug = NULL;
}
EXPORT_SYMBOL(blk_flush_plug_list);
void blk_finish_plug(struct blk_plug *plug)
{
if (plug)
__blk_finish_plug(current, plug);
blk_flush_plug_list(plug, false);
if (plug == current->plug)
current->plug = NULL;
}
EXPORT_SYMBOL(blk_finish_plug);
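The typical submitter pattern, which the raid1d/raid10d/raid5d hunks below adopt (sketch; my_submit_batch is hypothetical):

static void my_submit_batch(struct bio **bios, int n)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);
	for (i = 0; i < n; i++)
		submit_bio(WRITE, bios[i]);	/* requests collect on the plug */
	blk_finish_plug(&plug);	/* callbacks fire, list is sorted, queues run */
}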
void __blk_flush_plug(struct task_struct *tsk, struct blk_plug *plug)
{
__blk_finish_plug(tsk, plug);
tsk->plug = plug;
}
EXPORT_SYMBOL(__blk_flush_plug);
int __init blk_dev_init(void)
{
BUILD_BUG_ON(__REQ_NR_BITS > 8 *

@ -55,7 +55,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
WARN_ON(irqs_disabled());
spin_lock_irq(q->queue_lock);
__elv_add_request(q, rq, where);
__blk_run_queue(q, false);
__blk_run_queue(q);
/* the queue is stopped so it won't be plugged+unplugged */
if (rq->cmd_type == REQ_TYPE_PM_RESUME)
q->request_fn(q);

@ -218,7 +218,7 @@ static void flush_end_io(struct request *flush_rq, int error)
* request_fn may confuse the driver. Always use kblockd.
*/
if (queued)
__blk_run_queue(q, true);
blk_run_queue_async(q);
}
/**
@ -274,7 +274,7 @@ static void flush_data_end_io(struct request *rq, int error)
* the comment in flush_end_io().
*/
if (blk_flush_complete_seq(rq, REQ_FSEQ_DATA, error))
__blk_run_queue(q, true);
blk_run_queue_async(q);
}
/**

@ -498,7 +498,6 @@ int blk_register_queue(struct gendisk *disk)
{
int ret;
struct device *dev = disk_to_dev(disk);
struct request_queue *q = disk->queue;
if (WARN_ON(!q))
@ -521,7 +520,7 @@ int blk_register_queue(struct gendisk *disk)
if (ret) {
kobject_uevent(&q->kobj, KOBJ_REMOVE);
kobject_del(&q->kobj);
blk_trace_remove_sysfs(disk_to_dev(disk));
blk_trace_remove_sysfs(dev);
kobject_put(&dev->kobj);
return ret;
}

@ -22,6 +22,7 @@ void blk_rq_timed_out_timer(unsigned long data);
void blk_delete_timer(struct request *);
void blk_add_timer(struct request *);
void __generic_unplug_device(struct request_queue *);
void blk_run_queue_async(struct request_queue *q);
/*
* Internal atomic flags for request handling

@ -3368,7 +3368,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
cfqd->busy_queues > 1) {
cfq_del_timer(cfqd, cfqq);
cfq_clear_cfqq_wait_request(cfqq);
__blk_run_queue(cfqd->queue, false);
__blk_run_queue(cfqd->queue);
} else {
cfq_blkiocg_update_idle_time_stats(
&cfqq->cfqg->blkg);
@ -3383,7 +3383,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
* this new queue is RT and the current one is BE
*/
cfq_preempt_queue(cfqd, cfqq);
__blk_run_queue(cfqd->queue, false);
__blk_run_queue(cfqd->queue);
}
}
@ -3743,7 +3743,7 @@ static void cfq_kick_queue(struct work_struct *work)
struct request_queue *q = cfqd->queue;
spin_lock_irq(q->queue_lock);
__blk_run_queue(cfqd->queue, false);
__blk_run_queue(cfqd->queue);
spin_unlock_irq(q->queue_lock);
}

@ -642,7 +642,7 @@ void elv_quiesce_start(struct request_queue *q)
*/
elv_drain_elevator(q);
while (q->rq.elvpriv) {
__blk_run_queue(q, false);
__blk_run_queue(q);
spin_unlock_irq(q->queue_lock);
msleep(10);
spin_lock_irq(q->queue_lock);
@ -695,7 +695,7 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
* with anything. There's no point in delaying queue
* processing.
*/
__blk_run_queue(q, false);
__blk_run_queue(q);
break;
case ELEVATOR_INSERT_SORT_MERGE:

@ -232,9 +232,17 @@ static int i2c_inb(struct i2c_adapter *i2c_adap)
* Sanity check for the adapter hardware - check the reaction of
* the bus lines only if it seems to be idle.
*/
static int test_bus(struct i2c_algo_bit_data *adap, char *name)
static int test_bus(struct i2c_adapter *i2c_adap)
{
int scl, sda;
struct i2c_algo_bit_data *adap = i2c_adap->algo_data;
const char *name = i2c_adap->name;
int scl, sda, ret;
if (adap->pre_xfer) {
ret = adap->pre_xfer(i2c_adap);
if (ret < 0)
return -ENODEV;
}
if (adap->getscl == NULL)
pr_info("%s: Testing SDA only, SCL is not readable\n", name);
@ -297,11 +305,19 @@ static int test_bus(struct i2c_algo_bit_data *adap, char *name)
"while pulling SCL high!\n", name);
goto bailout;
}
if (adap->post_xfer)
adap->post_xfer(i2c_adap);
pr_info("%s: Test OK\n", name);
return 0;
bailout:
sdahi(adap);
sclhi(adap);
if (adap->post_xfer)
adap->post_xfer(i2c_adap);
return -ENODEV;
}
@ -607,7 +623,7 @@ static int __i2c_bit_add_bus(struct i2c_adapter *adap,
int ret;
if (bit_test) {
ret = test_bus(bit_adap, adap->name);
ret = test_bus(adap);
if (ret < 0)
return -ENODEV;
}

@ -797,7 +797,8 @@ static int i2c_do_add_adapter(struct i2c_driver *driver,
/* Let legacy drivers scan this bus for matching devices */
if (driver->attach_adapter) {
dev_warn(&adap->dev, "attach_adapter method is deprecated\n");
dev_warn(&adap->dev, "%s: attach_adapter method is deprecated\n",
driver->driver.name);
dev_warn(&adap->dev, "Please use another way to instantiate "
"your i2c_client\n");
/* We ignore the return code; if it fails, too bad */
@ -984,7 +985,8 @@ static int i2c_do_del_adapter(struct i2c_driver *driver,
if (!driver->detach_adapter)
return 0;
dev_warn(&adapter->dev, "detach_adapter method is deprecated\n");
dev_warn(&adapter->dev, "%s: detach_adapter method is deprecated\n",
driver->driver.name);
res = driver->detach_adapter(adapter);
if (res)
dev_err(&adapter->dev, "detach_adapter failed (%d) "

@ -39,13 +39,13 @@ struct evdev {
};
struct evdev_client {
int head;
int tail;
unsigned int head;
unsigned int tail;
spinlock_t buffer_lock; /* protects access to buffer, head and tail */
struct fasync_struct *fasync;
struct evdev *evdev;
struct list_head node;
int bufsize;
unsigned int bufsize;
struct input_event buffer[];
};
@ -55,16 +55,25 @@ static DEFINE_MUTEX(evdev_table_mutex);
static void evdev_pass_event(struct evdev_client *client,
struct input_event *event)
{
/*
* Interrupts are disabled, just acquire the lock.
* Make sure we don't leave with the client buffer
* "empty" by having client->head == client->tail.
*/
/* Interrupts are disabled, just acquire the lock. */
spin_lock(&client->buffer_lock);
do {
client->buffer[client->head++] = *event;
client->head &= client->bufsize - 1;
} while (client->head == client->tail);
client->buffer[client->head++] = *event;
client->head &= client->bufsize - 1;
if (unlikely(client->head == client->tail)) {
/*
* This effectively "drops" all unconsumed events, leaving
* EV_SYN/SYN_DROPPED plus the newest event in the queue.
*/
client->tail = (client->head - 2) & (client->bufsize - 1);
client->buffer[client->tail].time = event->time;
client->buffer[client->tail].type = EV_SYN;
client->buffer[client->tail].code = SYN_DROPPED;
client->buffer[client->tail].value = 0;
}
spin_unlock(&client->buffer_lock);
if (event->type == EV_SYN)
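A userspace reader that honors this new SYN_DROPPED event might look like the following sketch (hypothetical helper; a real client would also re-read device state via the EVIOCG* ioctls after a drop):

#include <linux/input.h>
#include <unistd.h>

static void drain_events(int fd)
{
	struct input_event ev;
	int dropped = 0;

	while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
		if (ev.type == EV_SYN && ev.code == SYN_DROPPED) {
			dropped = 1;	/* queue overran; state is stale */
			continue;
		}
		if (dropped) {
			if (ev.type == EV_SYN && ev.code == SYN_REPORT)
				dropped = 0;	/* next packet is consistent */
			continue;	/* discard events from the torn packet */
		}
		/* process ev normally */
	}
}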

@ -1746,6 +1746,42 @@ void input_set_capability(struct input_dev *dev, unsigned int type, unsigned int
}
EXPORT_SYMBOL(input_set_capability);
static unsigned int input_estimate_events_per_packet(struct input_dev *dev)
{
int mt_slots;
int i;
unsigned int events;
if (dev->mtsize) {
mt_slots = dev->mtsize;
} else if (test_bit(ABS_MT_TRACKING_ID, dev->absbit)) {
mt_slots = dev->absinfo[ABS_MT_TRACKING_ID].maximum -
dev->absinfo[ABS_MT_TRACKING_ID].minimum + 1;
mt_slots = clamp(mt_slots, 2, 32);
} else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) {
mt_slots = 2;
} else {
mt_slots = 0;
}
events = mt_slots + 1; /* count SYN_MT_REPORT and SYN_REPORT */
for (i = 0; i < ABS_CNT; i++) {
if (test_bit(i, dev->absbit)) {
if (input_is_mt_axis(i))
events += mt_slots;
else
events++;
}
}
for (i = 0; i < REL_CNT; i++)
if (test_bit(i, dev->relbit))
events++;
return events;
}
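Illustrative arithmetic (hypothetical device): for a slotted touchscreen with dev->mtsize = 10 that reports ABS_MT_POSITION_X, ABS_MT_POSITION_Y and ABS_PRESSURE, events starts at mt_slots + 1 = 11 (slots plus SYN_MT_REPORT/SYN_REPORT), the two per-slot MT axes add 10 each, and ABS_PRESSURE adds 1, for an estimate of 32 events per packet.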
#define INPUT_CLEANSE_BITMASK(dev, type, bits) \
do { \
if (!test_bit(EV_##type, dev->evbit)) \
@ -1793,6 +1829,10 @@ int input_register_device(struct input_dev *dev)
/* Make sure that bitmasks not mentioned in dev->evbit are clean. */
input_cleanse_bitmasks(dev);
if (!dev->hint_events_per_packet)
dev->hint_events_per_packet =
input_estimate_events_per_packet(dev);
/*
* If delay and period are pre-set by the driver, then autorepeating
* is handled by the driver itself and we don't do it in input.c.

@ -332,18 +332,20 @@ static int __devinit twl4030_kp_program(struct twl4030_keypad *kp)
static int __devinit twl4030_kp_probe(struct platform_device *pdev)
{
struct twl4030_keypad_data *pdata = pdev->dev.platform_data;
const struct matrix_keymap_data *keymap_data = pdata->keymap_data;
const struct matrix_keymap_data *keymap_data;
struct twl4030_keypad *kp;
struct input_dev *input;
u8 reg;
int error;
if (!pdata || !pdata->rows || !pdata->cols ||
if (!pdata || !pdata->rows || !pdata->cols || !pdata->keymap_data ||
pdata->rows > TWL4030_MAX_ROWS || pdata->cols > TWL4030_MAX_COLS) {
dev_err(&pdev->dev, "Invalid platform_data\n");
return -EINVAL;
}
keymap_data = pdata->keymap_data;
kp = kzalloc(sizeof(*kp), GFP_KERNEL);
input = input_allocate_device();
if (!kp || !input) {

@ -303,7 +303,7 @@ static void xenkbd_backend_changed(struct xenbus_device *dev,
enum xenbus_state backend_state)
{
struct xenkbd_info *info = dev_get_drvdata(&dev->dev);
int val;
int ret, val;
switch (backend_state) {
case XenbusStateInitialising:
@ -316,6 +316,17 @@ static void xenkbd_backend_changed(struct xenbus_device *dev,
case XenbusStateInitWait:
InitWait:
ret = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
"feature-abs-pointer", "%d", &val);
if (ret < 0)
val = 0;
if (val) {
ret = xenbus_printf(XBT_NIL, info->xbdev->nodename,
"request-abs-pointer", "1");
if (ret)
pr_warning("xenkbd: can't request abs-pointer\n");
}
xenbus_switch_state(dev, XenbusStateConnected);
break;

@ -399,31 +399,34 @@ static int h3600ts_connect(struct serio *serio, struct serio_driver *drv)
IRQF_SHARED | IRQF_DISABLED, "h3600_action", &ts->dev)) {
printk(KERN_ERR "h3600ts.c: Could not allocate Action Button IRQ!\n");
err = -EBUSY;
goto fail2;
goto fail1;
}
if (request_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, npower_button_handler,
IRQF_SHARED | IRQF_DISABLED, "h3600_suspend", &ts->dev)) {
printk(KERN_ERR "h3600ts.c: Could not allocate Power Button IRQ!\n");
err = -EBUSY;
goto fail3;
goto fail2;
}
serio_set_drvdata(serio, ts);
err = serio_open(serio, drv);
if (err)
return err;
goto fail3;
//h3600_flite_control(1, 25); /* default brightness */
input_register_device(ts->dev);
err = input_register_device(ts->dev);
if (err)
goto fail4;
return 0;
fail3: free_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, ts->dev);
fail4: serio_close(serio);
fail3: serio_set_drvdata(serio, NULL);
free_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, ts->dev);
fail2: free_irq(IRQ_GPIO_BITSY_ACTION_BUTTON, ts->dev);
fail1: serio_set_drvdata(serio, NULL);
input_free_device(input_dev);
fail1: input_free_device(input_dev);
kfree(ts);
return err;
}

@ -178,6 +178,10 @@ static int __devinit regulator_led_probe(struct platform_device *pdev)
led->cdev.flags |= LED_CORE_SUSPENDRESUME;
led->vcc = vcc;
/* to handle correctly an already enabled regulator */
if (regulator_is_enabled(led->vcc))
led->enabled = 1;
mutex_init(&led->mutex);
INIT_WORK(&led->work, led_work);

@ -390,13 +390,6 @@ static int raid_is_congested(struct dm_target_callbacks *cb, int bits)
return md_raid5_congested(&rs->md, bits);
}
static void raid_unplug(struct dm_target_callbacks *cb)
{
struct raid_set *rs = container_of(cb, struct raid_set, callbacks);
md_raid5_kick_device(rs->md.private);
}
/*
* Construct a RAID4/5/6 mapping:
* Args:
@ -487,7 +480,6 @@ static int raid_ctr(struct dm_target *ti, unsigned argc, char **argv)
}
rs->callbacks.congested_fn = raid_is_congested;
rs->callbacks.unplug_fn = raid_unplug;
dm_table_add_target_callbacks(ti->table, &rs->callbacks);
return 0;

@ -447,48 +447,59 @@ EXPORT_SYMBOL(md_flush_request);
/* Support for plugging.
* This mirrors the plugging support in request_queue, but does not
* require having a whole queue
* require having a whole queue or request structures.
* We allocate an md_plug_cb for each md device and each thread it gets
* plugged on. This links to the private plug_handle structure in the
* personality data where we keep a count of the number of outstanding
* plugs so other code can see if a plug is active.
*/
static void plugger_work(struct work_struct *work)
{
struct plug_handle *plug =
container_of(work, struct plug_handle, unplug_work);
plug->unplug_fn(plug);
}
static void plugger_timeout(unsigned long data)
{
struct plug_handle *plug = (void *)data;
kblockd_schedule_work(NULL, &plug->unplug_work);
}
void plugger_init(struct plug_handle *plug,
void (*unplug_fn)(struct plug_handle *))
{
plug->unplug_flag = 0;
plug->unplug_fn = unplug_fn;
init_timer(&plug->unplug_timer);
plug->unplug_timer.function = plugger_timeout;
plug->unplug_timer.data = (unsigned long)plug;
INIT_WORK(&plug->unplug_work, plugger_work);
}
EXPORT_SYMBOL_GPL(plugger_init);
struct md_plug_cb {
struct blk_plug_cb cb;
mddev_t *mddev;
};
void plugger_set_plug(struct plug_handle *plug)
static void plugger_unplug(struct blk_plug_cb *cb)
{
if (!test_and_set_bit(PLUGGED_FLAG, &plug->unplug_flag))
mod_timer(&plug->unplug_timer, jiffies + msecs_to_jiffies(3)+1);
struct md_plug_cb *mdcb = container_of(cb, struct md_plug_cb, cb);
if (atomic_dec_and_test(&mdcb->mddev->plug_cnt))
md_wakeup_thread(mdcb->mddev->thread);
kfree(mdcb);
}
EXPORT_SYMBOL_GPL(plugger_set_plug);
int plugger_remove_plug(struct plug_handle *plug)
/* Check that an unplug wakeup will come shortly.
* If not, wake up the md thread immediately.
*/
int mddev_check_plugged(mddev_t *mddev)
{
if (test_and_clear_bit(PLUGGED_FLAG, &plug->unplug_flag)) {
del_timer(&plug->unplug_timer);
return 1;
} else
struct blk_plug *plug = current->plug;
struct md_plug_cb *mdcb;
if (!plug)
return 0;
}
EXPORT_SYMBOL_GPL(plugger_remove_plug);
list_for_each_entry(mdcb, &plug->cb_list, cb.list) {
if (mdcb->cb.callback == plugger_unplug &&
mdcb->mddev == mddev) {
/* Already on the list, move to top */
if (mdcb != list_first_entry(&plug->cb_list,
struct md_plug_cb,
cb.list))
list_move(&mdcb->cb.list, &plug->cb_list);
return 1;
}
}
/* Not currently on the callback list */
mdcb = kmalloc(sizeof(*mdcb), GFP_ATOMIC);
if (!mdcb)
return 0;
mdcb->mddev = mddev;
mdcb->cb.callback = plugger_unplug;
atomic_inc(&mddev->plug_cnt);
list_add(&mdcb->cb.list, &plug->cb_list);
return 1;
}
EXPORT_SYMBOL_GPL(mddev_check_plugged);
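Callers pair the return value with a conditional wakeup, as the make_request hunks below do (sketch; my_make_request is hypothetical):

static void my_make_request(mddev_t *mddev, struct bio *bio)
{
	int plugged = mddev_check_plugged(mddev);

	/* ... queue the bio on internal pending lists ... */

	if (!plugged)
		md_wakeup_thread(mddev->thread);	/* no unplug coming; kick now */
}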
static inline mddev_t *mddev_get(mddev_t *mddev)
{
@ -538,6 +549,7 @@ void mddev_init(mddev_t *mddev)
atomic_set(&mddev->active, 1);
atomic_set(&mddev->openers, 0);
atomic_set(&mddev->active_io, 0);
atomic_set(&mddev->plug_cnt, 0);
spin_lock_init(&mddev->write_lock);
atomic_set(&mddev->flush_pending, 0);
init_waitqueue_head(&mddev->sb_wait);
@ -4723,7 +4735,6 @@ static void md_clean(mddev_t *mddev)
mddev->bitmap_info.chunksize = 0;
mddev->bitmap_info.daemon_sleep = 0;
mddev->bitmap_info.max_write_behind = 0;
mddev->plug = NULL;
}
static void __md_stop_writes(mddev_t *mddev)
@ -6688,12 +6699,6 @@ int md_allow_write(mddev_t *mddev)
}
EXPORT_SYMBOL_GPL(md_allow_write);
void md_unplug(mddev_t *mddev)
{
if (mddev->plug)
mddev->plug->unplug_fn(mddev->plug);
}
#define SYNC_MARKS 10
#define SYNC_MARK_STEP (3*HZ)
void md_do_sync(mddev_t *mddev)

@ -29,26 +29,6 @@
typedef struct mddev_s mddev_t;
typedef struct mdk_rdev_s mdk_rdev_t;
/* generic plugging support - like that provided with request_queue,
* but does not require a request_queue
*/
struct plug_handle {
void (*unplug_fn)(struct plug_handle *);
struct timer_list unplug_timer;
struct work_struct unplug_work;
unsigned long unplug_flag;
};
#define PLUGGED_FLAG 1
void plugger_init(struct plug_handle *plug,
void (*unplug_fn)(struct plug_handle *));
void plugger_set_plug(struct plug_handle *plug);
int plugger_remove_plug(struct plug_handle *plug);
static inline void plugger_flush(struct plug_handle *plug)
{
del_timer_sync(&plug->unplug_timer);
cancel_work_sync(&plug->unplug_work);
}
/*
* MD's 'extended' device
*/
@ -199,6 +179,9 @@ struct mddev_s
int delta_disks, new_level, new_layout;
int new_chunk_sectors;
atomic_t plug_cnt; /* If device is expecting
* more bios soon.
*/
struct mdk_thread_s *thread; /* management thread */
struct mdk_thread_s *sync_thread; /* doing resync or reconstruct */
sector_t curr_resync; /* last block scheduled */
@ -336,7 +319,6 @@ struct mddev_s
struct list_head all_mddevs;
struct attribute_group *to_remove;
struct plug_handle *plug; /* if used by personality */
struct bio_set *bio_set;
@ -516,7 +498,6 @@ extern int md_integrity_register(mddev_t *mddev);
extern void md_integrity_add_rdev(mdk_rdev_t *rdev, mddev_t *mddev);
extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale);
extern void restore_bitmap_write_access(struct file *file);
extern void md_unplug(mddev_t *mddev);
extern void mddev_init(mddev_t *mddev);
extern int md_run(mddev_t *mddev);
@ -530,4 +511,5 @@ extern struct bio *bio_clone_mddev(struct bio *bio, gfp_t gfp_mask,
mddev_t *mddev);
extern struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs,
mddev_t *mddev);
extern int mddev_check_plugged(mddev_t *mddev);
#endif /* _MD_MD_H */

@ -565,12 +565,6 @@ static void flush_pending_writes(conf_t *conf)
spin_unlock_irq(&conf->device_lock);
}
static void md_kick_device(mddev_t *mddev)
{
blk_flush_plug(current);
md_wakeup_thread(mddev->thread);
}
/* Barriers....
* Sometimes we need to suspend IO while we do something else,
* either some resync/recovery, or reconfigure the array.
@ -600,7 +594,7 @@ static void raise_barrier(conf_t *conf)
/* Wait until no block IO is waiting */
wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting,
conf->resync_lock, md_kick_device(conf->mddev));
conf->resync_lock, );
/* block any new IO from starting */
conf->barrier++;
@ -608,7 +602,7 @@ static void raise_barrier(conf_t *conf)
/* Now wait for all pending IO to complete */
wait_event_lock_irq(conf->wait_barrier,
!conf->nr_pending && conf->barrier < RESYNC_DEPTH,
conf->resync_lock, md_kick_device(conf->mddev));
conf->resync_lock, );
spin_unlock_irq(&conf->resync_lock);
}
@ -630,7 +624,7 @@ static void wait_barrier(conf_t *conf)
conf->nr_waiting++;
wait_event_lock_irq(conf->wait_barrier, !conf->barrier,
conf->resync_lock,
md_kick_device(conf->mddev));
);
conf->nr_waiting--;
}
conf->nr_pending++;
@ -666,8 +660,7 @@ static void freeze_array(conf_t *conf)
wait_event_lock_irq(conf->wait_barrier,
conf->nr_pending == conf->nr_queued+1,
conf->resync_lock,
({ flush_pending_writes(conf);
md_kick_device(conf->mddev); }));
flush_pending_writes(conf));
spin_unlock_irq(&conf->resync_lock);
}
static void unfreeze_array(conf_t *conf)
@ -729,6 +722,7 @@ static int make_request(mddev_t *mddev, struct bio * bio)
const unsigned long do_sync = (bio->bi_rw & REQ_SYNC);
const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA));
mdk_rdev_t *blocked_rdev;
int plugged;
/*
* Register the new request and wait if the reconstruction
@ -820,6 +814,8 @@ static int make_request(mddev_t *mddev, struct bio * bio)
* inc refcount on their rdev. Record them by setting
* bios[x] to bio
*/
plugged = mddev_check_plugged(mddev);
disks = conf->raid_disks;
retry_write:
blocked_rdev = NULL;
@ -925,7 +921,7 @@ static int make_request(mddev_t *mddev, struct bio * bio)
/* In case raid1d snuck in to freeze_array */
wake_up(&conf->wait_barrier);
if (do_sync || !bitmap)
if (do_sync || !bitmap || !plugged)
md_wakeup_thread(mddev->thread);
return 0;
@ -1516,13 +1512,16 @@ static void raid1d(mddev_t *mddev)
conf_t *conf = mddev->private;
struct list_head *head = &conf->retry_list;
mdk_rdev_t *rdev;
struct blk_plug plug;
md_check_recovery(mddev);
blk_start_plug(&plug);
for (;;) {
char b[BDEVNAME_SIZE];
flush_pending_writes(conf);
if (atomic_read(&mddev->plug_cnt) == 0)
flush_pending_writes(conf);
spin_lock_irqsave(&conf->device_lock, flags);
if (list_empty(head)) {
@ -1593,6 +1592,7 @@ static void raid1d(mddev_t *mddev)
}
cond_resched();
}
blk_finish_plug(&plug);
}
@ -2039,7 +2039,6 @@ static int stop(mddev_t *mddev)
md_unregister_thread(mddev->thread);
mddev->thread = NULL;
blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
if (conf->r1bio_pool)
mempool_destroy(conf->r1bio_pool);
kfree(conf->mirrors);

@ -634,12 +634,6 @@ static void flush_pending_writes(conf_t *conf)
spin_unlock_irq(&conf->device_lock);
}
static void md_kick_device(mddev_t *mddev)
{
blk_flush_plug(current);
md_wakeup_thread(mddev->thread);
}
/* Barriers....
* Sometimes we need to suspend IO while we do something else,
* either some resync/recovery, or reconfigure the array.
@ -669,15 +663,15 @@ static void raise_barrier(conf_t *conf, int force)
/* Wait until no block IO is waiting (unless 'force') */
wait_event_lock_irq(conf->wait_barrier, force || !conf->nr_waiting,
conf->resync_lock, md_kick_device(conf->mddev));
conf->resync_lock, );
/* block any new IO from starting */
conf->barrier++;
/* No wait for all pending IO to complete */
/* Now wait for all pending IO to complete */
wait_event_lock_irq(conf->wait_barrier,
!conf->nr_pending && conf->barrier < RESYNC_DEPTH,
conf->resync_lock, md_kick_device(conf->mddev));
conf->resync_lock, );
spin_unlock_irq(&conf->resync_lock);
}
@ -698,7 +692,7 @@ static void wait_barrier(conf_t *conf)
conf->nr_waiting++;
wait_event_lock_irq(conf->wait_barrier, !conf->barrier,
conf->resync_lock,
md_kick_device(conf->mddev));
);
conf->nr_waiting--;
}
conf->nr_pending++;
@ -734,8 +728,8 @@ static void freeze_array(conf_t *conf)
wait_event_lock_irq(conf->wait_barrier,
conf->nr_pending == conf->nr_queued+1,
conf->resync_lock,
({ flush_pending_writes(conf);
md_kick_device(conf->mddev); }));
flush_pending_writes(conf));
spin_unlock_irq(&conf->resync_lock);
}
@ -762,6 +756,7 @@ static int make_request(mddev_t *mddev, struct bio * bio)
const unsigned long do_fua = (bio->bi_rw & REQ_FUA);
unsigned long flags;
mdk_rdev_t *blocked_rdev;
int plugged;
if (unlikely(bio->bi_rw & REQ_FLUSH)) {
md_flush_request(mddev, bio);
@ -870,6 +865,8 @@ static int make_request(mddev_t *mddev, struct bio * bio)
* inc refcount on their rdev. Record them by setting
* bios[x] to bio
*/
plugged = mddev_check_plugged(mddev);
raid10_find_phys(conf, r10_bio);
retry_write:
blocked_rdev = NULL;
@ -946,9 +943,8 @@ static int make_request(mddev_t *mddev, struct bio * bio)
/* In case raid10d snuck in to freeze_array */
wake_up(&conf->wait_barrier);
if (do_sync || !mddev->bitmap)
if (do_sync || !mddev->bitmap || !plugged)
md_wakeup_thread(mddev->thread);
return 0;
}
@ -1640,9 +1636,11 @@ static void raid10d(mddev_t *mddev)
conf_t *conf = mddev->private;
struct list_head *head = &conf->retry_list;
mdk_rdev_t *rdev;
struct blk_plug plug;
md_check_recovery(mddev);
blk_start_plug(&plug);
for (;;) {
char b[BDEVNAME_SIZE];
@ -1716,6 +1714,7 @@ static void raid10d(mddev_t *mddev)
}
cond_resched();
}
blk_finish_plug(&plug);
}

@ -27,12 +27,12 @@
*
* We group bitmap updates into batches. Each batch has a number.
* We may write out several batches at once, but that isn't very important.
* conf->bm_write is the number of the last batch successfully written.
* conf->bm_flush is the number of the last batch that was closed to
* conf->seq_write is the number of the last batch successfully written.
* conf->seq_flush is the number of the last batch that was closed to
* new additions.
* When we discover that we will need to write to any block in a stripe
* (in add_stripe_bio) we update the in-memory bitmap and record in sh->bm_seq
* the number of the batch it will be in. This is bm_flush+1.
* the number of the batch it will be in. This is seq_flush+1.
* When we are ready to do a write, if that batch hasn't been written yet,
* we plug the array and queue the stripe for later.
* When an unplug happens, we increment bm_flush, thus closing the current
@ -199,14 +199,12 @@ static void __release_stripe(raid5_conf_t *conf, struct stripe_head *sh)
BUG_ON(!list_empty(&sh->lru));
BUG_ON(atomic_read(&conf->active_stripes)==0);
if (test_bit(STRIPE_HANDLE, &sh->state)) {
if (test_bit(STRIPE_DELAYED, &sh->state)) {
if (test_bit(STRIPE_DELAYED, &sh->state))
list_add_tail(&sh->lru, &conf->delayed_list);
plugger_set_plug(&conf->plug);
} else if (test_bit(STRIPE_BIT_DELAY, &sh->state) &&
sh->bm_seq - conf->seq_write > 0) {
else if (test_bit(STRIPE_BIT_DELAY, &sh->state) &&
sh->bm_seq - conf->seq_write > 0)
list_add_tail(&sh->lru, &conf->bitmap_list);
plugger_set_plug(&conf->plug);
} else {
else {
clear_bit(STRIPE_BIT_DELAY, &sh->state);
list_add_tail(&sh->lru, &conf->handle_list);
}
@ -461,7 +459,7 @@ get_active_stripe(raid5_conf_t *conf, sector_t sector,
< (conf->max_nr_stripes *3/4)
|| !conf->inactive_blocked),
conf->device_lock,
md_raid5_kick_device(conf));
);
conf->inactive_blocked = 0;
} else
init_stripe(sh, sector, previous);
@ -1470,7 +1468,7 @@ static int resize_stripes(raid5_conf_t *conf, int newsize)
wait_event_lock_irq(conf->wait_for_stripe,
!list_empty(&conf->inactive_list),
conf->device_lock,
blk_flush_plug(current));
);
osh = get_free_stripe(conf);
spin_unlock_irq(&conf->device_lock);
atomic_set(&nsh->count, 1);
@ -3623,8 +3621,7 @@ static void raid5_activate_delayed(raid5_conf_t *conf)
atomic_inc(&conf->preread_active_stripes);
list_add_tail(&sh->lru, &conf->hold_list);
}
} else
plugger_set_plug(&conf->plug);
}
}
static void activate_bit_delay(raid5_conf_t *conf)
@ -3641,21 +3638,6 @@ static void activate_bit_delay(raid5_conf_t *conf)
}
}
void md_raid5_kick_device(raid5_conf_t *conf)
{
blk_flush_plug(current);
raid5_activate_delayed(conf);
md_wakeup_thread(conf->mddev->thread);
}
EXPORT_SYMBOL_GPL(md_raid5_kick_device);
static void raid5_unplug(struct plug_handle *plug)
{
raid5_conf_t *conf = container_of(plug, raid5_conf_t, plug);
md_raid5_kick_device(conf);
}
int md_raid5_congested(mddev_t *mddev, int bits)
{
raid5_conf_t *conf = mddev->private;
@ -3945,6 +3927,7 @@ static int make_request(mddev_t *mddev, struct bio * bi)
struct stripe_head *sh;
const int rw = bio_data_dir(bi);
int remaining;
int plugged;
if (unlikely(bi->bi_rw & REQ_FLUSH)) {
md_flush_request(mddev, bi);
@ -3963,6 +3946,7 @@ static int make_request(mddev_t *mddev, struct bio * bi)
bi->bi_next = NULL;
bi->bi_phys_segments = 1; /* over-loaded to count active stripes */
plugged = mddev_check_plugged(mddev);
for (;logical_sector < last_sector; logical_sector += STRIPE_SECTORS) {
DEFINE_WAIT(w);
int disks, data_disks;
@ -4057,7 +4041,7 @@ static int make_request(mddev_t *mddev, struct bio * bi)
* add failed due to overlap. Flush everything
* and wait a while
*/
md_raid5_kick_device(conf);
md_wakeup_thread(mddev->thread);
release_stripe(sh);
schedule();
goto retry;
@ -4077,6 +4061,9 @@ static int make_request(mddev_t *mddev, struct bio * bi)
}
}
if (!plugged)
md_wakeup_thread(mddev->thread);
spin_lock_irq(&conf->device_lock);
remaining = raid5_dec_bi_phys_segments(bi);
spin_unlock_irq(&conf->device_lock);
@ -4478,24 +4465,30 @@ static void raid5d(mddev_t *mddev)
struct stripe_head *sh;
raid5_conf_t *conf = mddev->private;
int handled;
struct blk_plug plug;
pr_debug("+++ raid5d active\n");
md_check_recovery(mddev);
blk_start_plug(&plug);
handled = 0;
spin_lock_irq(&conf->device_lock);
while (1) {
struct bio *bio;
if (conf->seq_flush != conf->seq_write) {
int seq = conf->seq_flush;
if (atomic_read(&mddev->plug_cnt) == 0 &&
!list_empty(&conf->bitmap_list)) {
/* Now is a good time to flush some bitmap updates */
conf->seq_flush++;
spin_unlock_irq(&conf->device_lock);
bitmap_unplug(mddev->bitmap);
spin_lock_irq(&conf->device_lock);
conf->seq_write = seq;
conf->seq_write = conf->seq_flush;
activate_bit_delay(conf);
}
if (atomic_read(&mddev->plug_cnt) == 0)
raid5_activate_delayed(conf);
while ((bio = remove_bio_from_retry(conf))) {
int ok;
@ -4525,6 +4518,7 @@ static void raid5d(mddev_t *mddev)
spin_unlock_irq(&conf->device_lock);
async_tx_issue_pending_all();
blk_finish_plug(&plug);
pr_debug("--- raid5d inactive\n");
}
@ -5141,8 +5135,6 @@ static int run(mddev_t *mddev)
mdname(mddev));
md_set_array_sectors(mddev, raid5_size(mddev, 0, 0));
plugger_init(&conf->plug, raid5_unplug);
mddev->plug = &conf->plug;
if (mddev->queue) {
int chunk_size;
/* read-ahead size must cover two whole stripes, which
@ -5192,7 +5184,6 @@ static int stop(mddev_t *mddev)
mddev->thread = NULL;
if (mddev->queue)
mddev->queue->backing_dev_info.congested_fn = NULL;
plugger_flush(&conf->plug); /* the unplug fn references 'conf'*/
free_conf(conf);
mddev->private = NULL;
mddev->to_remove = &raid5_attrs_group;

@ -400,8 +400,6 @@ struct raid5_private_data {
* Cleared when a sync completes.
*/
struct plug_handle plug;
/* per cpu variables */
struct raid5_percpu {
struct page *spare_page; /* Used when checking P/Q in raid6 */

@ -348,15 +348,15 @@ static unsigned long gru_chiplet_cpu_to_mmr(int chiplet, int cpu, int *corep)
static int gru_irq_count[GRU_CHIPLETS_PER_BLADE];
static void gru_noop(unsigned int irq)
static void gru_noop(struct irq_data *d)
{
}
static struct irq_chip gru_chip[GRU_CHIPLETS_PER_BLADE] = {
[0 ... GRU_CHIPLETS_PER_BLADE - 1] {
.mask = gru_noop,
.unmask = gru_noop,
.ack = gru_noop
.irq_mask = gru_noop,
.irq_unmask = gru_noop,
.irq_ack = gru_noop
}
};

@ -25,6 +25,8 @@
#include <mach/balloon3.h>
#include <asm/mach-types.h>
#include "soc_common.h"
/*
@ -127,6 +129,9 @@ static int __init balloon3_pcmcia_init(void)
{
int ret;
if (!machine_is_balloon3())
return -ENODEV;
balloon3_pcmcia_device = platform_device_alloc("pxa2xx-pcmcia", -1);
if (!balloon3_pcmcia_device)
return -ENOMEM;

@ -69,15 +69,15 @@ static int trizeps_pcmcia_hw_init(struct soc_pcmcia_socket *skt)
for (i = 0; i < ARRAY_SIZE(irqs); i++) {
if (irqs[i].sock != skt->nr)
continue;
if (gpio_request(IRQ_TO_GPIO(irqs[i].irq), irqs[i].str) < 0) {
if (gpio_request(irq_to_gpio(irqs[i].irq), irqs[i].str) < 0) {
pr_err("%s: sock %d unable to request gpio %d\n",
__func__, skt->nr, IRQ_TO_GPIO(irqs[i].irq));
__func__, skt->nr, irq_to_gpio(irqs[i].irq));
ret = -EBUSY;
goto error;
}
if (gpio_direction_input(IRQ_TO_GPIO(irqs[i].irq)) < 0) {
if (gpio_direction_input(irq_to_gpio(irqs[i].irq)) < 0) {
pr_err("%s: sock %d unable to set input gpio %d\n",
__func__, skt->nr, IRQ_TO_GPIO(irqs[i].irq));
__func__, skt->nr, irq_to_gpio(irqs[i].irq));
ret = -EINVAL;
goto error;
}
@ -86,7 +86,7 @@ static int trizeps_pcmcia_hw_init(struct soc_pcmcia_socket *skt)
error:
for (; i >= 0; i--) {
gpio_free(IRQ_TO_GPIO(irqs[i].irq));
gpio_free(irq_to_gpio(irqs[i].irq));
}
return (ret);
}
@ -97,7 +97,7 @@ static void trizeps_pcmcia_hw_shutdown(struct soc_pcmcia_socket *skt)
/* free allocated gpio's */
gpio_free(GPIO_PRDY);
for (i = 0; i < ARRAY_SIZE(irqs); i++)
gpio_free(IRQ_TO_GPIO(irqs[i].irq));
gpio_free(irq_to_gpio(irqs[i].irq));
}
static unsigned long trizeps_pcmcia_status[2];
@ -226,6 +226,9 @@ static int __init trizeps_pcmcia_init(void)
{
int ret;
if (!machine_is_trizeps4() && !machine_is_trizeps4wl())
return -ENODEV;
trizeps_pcmcia_device = platform_device_alloc("pxa2xx-pcmcia", -1);
if (!trizeps_pcmcia_device)
return -ENOMEM;

@ -1171,16 +1171,17 @@ static int rio_hdid_setup(char *str)
__setup("riohdid=", rio_hdid_setup);
void rio_register_mport(struct rio_mport *port)
int rio_register_mport(struct rio_mport *port)
{
if (next_portid >= RIO_MAX_MPORTS) {
pr_err("RIO: reached specified max number of mports\n");
return;
return 1;
}
port->id = next_portid++;
port->host_deviceid = rio_get_hdid(port->id);
list_add_tail(&port->node, &rio_mports);
return 0;
}
EXPORT_SYMBOL_GPL(rio_local_get_device_id);

@ -418,3 +418,4 @@ DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTCPS1848, idtg2_switch_init);
DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTCPS1616, idtg2_switch_init);
DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTVPS1616, idtg2_switch_init);
DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTSPS1616, idtg2_switch_init);
DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTCPS1432, idtg2_switch_init);

@ -171,7 +171,7 @@ struct rtc_device *rtc_device_register(const char *name, struct device *dev,
err = __rtc_read_alarm(rtc, &alrm);
if (!err && !rtc_valid_tm(&alrm.time))
rtc_set_alarm(rtc, &alrm);
rtc_initialize_alarm(rtc, &alrm);
strlcpy(rtc->name, name, RTC_DEVICE_NAME_SIZE);
dev_set_name(&rtc->dev, "rtc%d", id);

@ -375,6 +375,32 @@ int rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
}
EXPORT_SYMBOL_GPL(rtc_set_alarm);
/* Called once per device from rtc_device_register */
int rtc_initialize_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
{
int err;
err = rtc_valid_tm(&alarm->time);
if (err != 0)
return err;
err = mutex_lock_interruptible(&rtc->ops_lock);
if (err)
return err;
rtc->aie_timer.node.expires = rtc_tm_to_ktime(alarm->time);
rtc->aie_timer.period = ktime_set(0, 0);
if (alarm->enabled) {
rtc->aie_timer.enabled = 1;
timerqueue_add(&rtc->timerqueue, &rtc->aie_timer.node);
}
mutex_unlock(&rtc->ops_lock);
return err;
}
EXPORT_SYMBOL_GPL(rtc_initialize_alarm);
int rtc_alarm_irq_enable(struct rtc_device *rtc, unsigned int enabled)
{
int err = mutex_lock_interruptible(&rtc->ops_lock);

@ -250,6 +250,8 @@ static int bfin_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
bfin_rtc_int_set_alarm(rtc);
else
bfin_rtc_int_clear(~(RTC_ISTAT_ALARM | RTC_ISTAT_ALARM_DAY));
return 0;
}
static int bfin_rtc_read_time(struct device *dev, struct rtc_time *tm)

@ -401,6 +401,7 @@ const struct platform_device_id mc13xxx_rtc_idtable[] = {
}, {
.name = "mc13892-rtc",
},
{ }
};
static struct platform_driver mc13xxx_rtc_driver = {

@ -336,7 +336,6 @@ static void s3c_rtc_release(struct device *dev)
/* do not clear AIE here, it may be needed for wake */
s3c_rtc_setpie(dev, 0);
free_irq(s3c_rtc_alarmno, rtc_dev);
free_irq(s3c_rtc_tickno, rtc_dev);
}
@ -408,7 +407,6 @@ static int __devexit s3c_rtc_remove(struct platform_device *dev)
platform_set_drvdata(dev, NULL);
rtc_device_unregister(rtc);
s3c_rtc_setpie(&dev->dev, 0);
s3c_rtc_setaie(&dev->dev, 0);
clk_disable(rtc_clk);

@ -443,7 +443,7 @@ static void scsi_run_queue(struct request_queue *q)
&sdev->request_queue->queue_flags);
if (flagset)
queue_flag_set(QUEUE_FLAG_REENTER, sdev->request_queue);
__blk_run_queue(sdev->request_queue, false);
__blk_run_queue(sdev->request_queue);
if (flagset)
queue_flag_clear(QUEUE_FLAG_REENTER, sdev->request_queue);
spin_unlock(sdev->request_queue->queue_lock);
