
Merge ../powerpc-merge

hifive-unleashed-5.1
Paul Mackerras 2006-02-24 14:05:47 +11:00
commit a00428f5b1
563 changed files with 7800 additions and 6850 deletions

View File

@ -11,6 +11,8 @@
Joel Schopp <jschopp@austin.ibm.com>
ia64/x86_64:
Ashok Raj <ashok.raj@intel.com>
s390:
Heiko Carstens <heiko.carstens@de.ibm.com>
Authors: Ashok Raj <ashok.raj@intel.com>
Lots of feedback: Nathan Lynch <nathanl@austin.ibm.com>,
@ -44,9 +46,28 @@ maxcpus=n Restrict boot time cpus to n. Say if you have 4 cpus, using
maxcpus=2 will only boot 2. You can choose to bring the
other cpus later online, read FAQ's for more info.
additional_cpus=n [x86_64 only] use this to limit hotpluggable cpus.
This option sets
cpu_possible_map = cpu_present_map + additional_cpus
additional_cpus*=n Use this to limit hotpluggable cpus. This option sets
cpu_possible_map = cpu_present_map + additional_cpus
(*) Option valid only for following architectures
- x86_64, ia64, s390
ia64 and x86_64 use the number of disabled local apics in the ACPI MADT
tables to determine the number of potentially hot-pluggable cpus. The
implementation should only rely on this to count the number of cpus, but
*MUST* not rely on the apicid values in those tables for disabled apics.
In the event the BIOS doesn't mark such hot-pluggable cpus as disabled
entries, one could use this parameter "additional_cpus=x" to represent
those cpus in the cpu_possible_map.
s390 uses the number of cpus it detects at IPL time to also determine the
number of bits in cpu_possible_map. If it is desired to add additional
cpus at a later time, the number should be specified using this option or
the possible_cpus option.
possible_cpus=n [s390 only] use this to set hotpluggable cpus.
This option sets possible_cpus bits in
cpu_possible_map. Thus keeping the numbers of bits set
constant even if the machine gets rebooted.
This option overrides additional_cpus.
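As a quick illustration of why the possible map must be sized up front (this
sketch is editorial, not part of the patch; the function and array names are
invented, and it assumes the usual for_each_cpu_mask() iterator):

	#include <linux/cpumask.h>
	#include <linux/slab.h>
	#include <linux/errno.h>

	/* One slot per CPU that could ever appear in the system. */
	static void *percpu_state[NR_CPUS];

	static int alloc_percpu_state(void)
	{
		int cpu;

		/* Walk cpu_possible_map once at init time; CPUs hot-added
		 * later must already be represented here, which is what
		 * additional_cpus= / possible_cpus= arrange for. */
		for_each_cpu_mask(cpu, cpu_possible_map) {
			percpu_state[cpu] = kmalloc(128, GFP_KERNEL);
			if (!percpu_state[cpu])
				return -ENOMEM;
		}
		return 0;
	}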
CPU maps and such
-----------------

View File

@ -171,3 +171,12 @@ Why: The ISA interface is faster and should be always available. The I2C
probing is also known to cause trouble in at least one case (see
bug #5889.)
Who: Jean Delvare <khali@linux-fr.org>
---------------------------
What: mount/umount uevents
When: February 2007
Why: These events are not correct, and do not properly let userspace know
when a file system has been mounted or unmounted. Userspace should
poll the /proc/mounts file instead to detect this properly.
Who: Greg Kroah-Hartman <gregkh@suse.de>

View File

@ -79,15 +79,18 @@ that instance in a system with many cpus making intensive use of it.
tmpfs has a mount option to set the NUMA memory allocation policy for
all files in that instance:
mpol=interleave prefers to allocate memory from each node in turn
mpol=default prefers to allocate memory from the local node
mpol=bind prefers to allocate from mpol_nodelist
mpol=preferred prefers to allocate from first node in mpol_nodelist
all files in that instance (if CONFIG_NUMA is enabled) - which can be
adjusted on the fly via 'mount -o remount ...'
The following mount option is used in conjunction with mpol=interleave,
mpol=bind or mpol=preferred:
mpol_nodelist: nodelist suitable for parsing with nodelist_parse.
mpol=default prefers to allocate memory from the local node
mpol=prefer:Node prefers to allocate memory from the given Node
mpol=bind:NodeList allocates memory only from nodes in NodeList
mpol=interleave prefers to allocate from each node in turn
mpol=interleave:NodeList allocates from each node of NodeList in turn
NodeList format is a comma-separated list of decimal numbers and ranges,
a range being two hyphen-separated decimal numbers, the smallest and
largest node numbers in the range. For example, mpol=bind:0-3,5,7,9-15
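(Editorial example, not from this patch: a hypothetical invocation combining
such a policy with the size option might be
'mount -t tmpfs -o size=1g,mpol=interleave:0-3 tmpfs /mnt/test'.)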
To specify the initial root directory you can use the following mount
@ -109,4 +112,4 @@ RAM/SWAP in 10240 inodes and it is only accessible by root.
Author:
Christoph Rohland <cr@sap.com>, 1.12.01
Updated:
Hugh Dickins <hugh@veritas.com>, 13 March 2005
Hugh Dickins <hugh@veritas.com>, 19 February 2006

View File

@ -57,8 +57,6 @@ OPTIONS
port=n port to connect to on the remote server
timeout=n request timeouts (in ms) (default 60000ms)
noextend force legacy mode (no 9P2000.u semantics)
uid attempt to mount as a particular uid
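(Editorial aside, not part of this patch: a hypothetical invocation combining
these options might look like 'mount -t 9p 10.0.0.2 /mnt/9 -o port=564,noextend',
assuming the 9p filesystem type name registered by this module.)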
@ -74,10 +72,16 @@ OPTIONS
RESOURCES
=========
The Linux version of the 9P server, along with some client-side utilities
can be found at http://v9fs.sf.net (along with a CVS repository of the
development branch of this module). There are user and developer mailing
lists here, as well as a bug-tracker.
The Linux version of the 9P server is now maintained under the npfs project
on sourceforge (http://sourceforge.net/projects/npfs).
There are user and developer mailing lists available through the v9fs project
on sourceforge (http://sourceforge.net/projects/v9fs).
News and other information is maintained on SWiK (http://swik.net/v9fs).
Bug reports may be issued through the kernel.org bugzilla
(http://bugzilla.kernel.org)
For more information on the Plan 9 Operating System check out
http://plan9.bell-labs.com/plan9

View File

@ -0,0 +1,234 @@
=================================
INTERNAL KERNEL ABI FOR FR-V ARCH
=================================
The internal FRV kernel ABI is not quite the same as the userspace ABI. A number of the registers
are used for special purposes, and the ABI is not consistent between modules vs core, and MMU vs
no-MMU.
This partly stems from the fact that FRV CPUs do not have a separate supervisor stack pointer, and
most of them do not have any scratch registers, thus requiring at least one general purpose
register to be clobbered in such an event. Also, within the kernel core, it is possible to simply
jump or call directly between functions using a relative offset. This cannot be extended to modules
for the displacement is likely to be too far. Thus in modules the address of a function to call
must be calculated in a register and then used, requiring two extra instructions.
This document has the following sections:
(*) System call register ABI
(*) CPU operating modes
(*) Internal kernel-mode register ABI
(*) Internal debug-mode register ABI
(*) Virtual interrupt handling
========================
SYSTEM CALL REGISTER ABI
========================
When a system call is made, the following registers are effective:
REGISTERS CALL RETURN
=============== ======================= =======================
GR7 System call number Preserved
GR8 Syscall arg #1 Return value
GR9-GR13 Syscall arg #2-6 Preserved
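As an editorial illustration only (not part of this document), a userspace stub
following the table above might look like the C sketch below; it assumes GCC
register variables and the "tira gr0,#0" software trap used by the kernel's own
syscall macros, and it omits errno handling:

	static inline long frv_syscall1(long nr, long arg1)
	{
		register unsigned long scnum asm("gr7") = nr;	/* syscall number */
		register unsigned long sc0   asm("gr8") = arg1;	/* arg #1, reused for the return value */

		asm volatile ("tira gr0,#0"
			      : "+r" (sc0)
			      : "r" (scnum)
			      : "memory");

		return sc0;	/* GR8 carries the result; GR9-GR13 would carry args #2-#6 */
	}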
===================
CPU OPERATING MODES
===================
The FR-V CPU has three basic operating modes. In order of increasing capability:
(1) User mode.
Basic userspace running mode.
(2) Kernel mode.
Normal kernel mode. There are many additional control registers available that may be
accessed in this mode, in addition to all the stuff available to user mode. This has two
submodes:
(a) Exceptions enabled (PSR.T == 1).
Exceptions will invoke the appropriate normal kernel mode handler. On entry to the
handler, the PSR.T bit will be cleared.
(b) Exceptions disabled (PSR.T == 0).
No exceptions or interrupts may happen. Any mandatory exceptions will cause the CPU to
halt unless the CPU is told to jump into debug mode instead.
(3) Debug mode.
No exceptions may happen in this mode. Memory protection and management exceptions will be
flagged for later consideration, but the exception handler won't be invoked. Debugging traps
such as hardware breakpoints and watchpoints will be ignored. This mode is entered only by
debugging events obtained from the other two modes.
All kernel mode registers may be accessed, plus a few extra debugging specific registers.
=================================
INTERNAL KERNEL-MODE REGISTER ABI
=================================
There are a number of permanent register assignments that are set up by entry.S in the exception
prologue. Note that there is a complete set of exception prologues for each of user->kernel
transition and kernel->kernel transition. There are also user->debug and kernel->debug mode
transition prologues.
REGISTER FLAVOUR USE
=============== ======= ====================================================
GR1 Supervisor stack pointer
GR15 Current thread info pointer
GR16 GP-Rel base register for small data
GR28 Current exception frame pointer (__frame)
GR29 Current task pointer (current)
GR30 Destroyed by kernel mode entry
GR31 NOMMU Destroyed by debug mode entry
GR31 MMU Destroyed by TLB miss kernel mode entry
CCR.ICC2 Virtual interrupt disablement tracking
CCCR.CC3 Cleared by exception prologue (atomic op emulation)
SCR0 MMU See mmu-layout.txt.
SCR1 MMU See mmu-layout.txt.
SCR2 MMU Save for EAR0 (destroyed by icache insns in debug mode)
SCR3 MMU Save for GR31 during debug exceptions
DAMR/IAMR NOMMU Fixed memory protection layout.
DAMR/IAMR MMU See mmu-layout.txt.
Certain registers are also used or modified across function calls:
REGISTER CALL RETURN
=============== =============================== ===============================
GR0 Fixed Zero -
GR2 Function call frame pointer
GR3 Special Preserved
GR3-GR7 - Clobbered
GR8 Function call arg #1 Return value (or clobbered)
GR9 Function call arg #2 Return value MSW (or clobbered)
GR10-GR13 Function call arg #3-#6 Clobbered
GR14 - Clobbered
GR15-GR16 Special Preserved
GR17-GR27 - Preserved
GR28-GR31 Special Only accessed explicitly
LR Return address after CALL Clobbered
CCR/CCCR - Mostly Clobbered
================================
INTERNAL DEBUG-MODE REGISTER ABI
================================
This is the same as the kernel-mode register ABI for function calls. The difference is that in
debug-mode there's a different stack and a different exception frame. Almost all the global
registers from kernel-mode (including the stack pointer) may be changed.
REGISTER FLAVOUR USE
=============== ======= ====================================================
GR1 Debug stack pointer
GR16 GP-Rel base register for small data
GR31 Current debug exception frame pointer (__debug_frame)
SCR3 MMU Saved value of GR31
Note that debug mode is able to interfere with the kernel's emulated atomic ops, so it must be
exceedingly careful not to do any that would interact with the main kernel in this regard. Hence
the debug mode code (gdbstub) is almost completely self-contained. The only external code used is
the sprintf family of functions.
Furthermore, break.S is so complicated because single-step mode does not switch off on entry to an
exception. That means unless manually disabled, single-stepping will blithely go on stepping into
things like interrupts. See gdbstub.txt for more information.
==========================
VIRTUAL INTERRUPT HANDLING
==========================
Because accesses to the PSR are so slow, and to disable interrupts we have to access it twice (once
to read and once to write), we don't actually disable interrupts at all if we don't have to. What
we do instead is use the ICC2 condition code flags to note virtual disablement, such that if we
then do take an interrupt, we note the flag, really disable interrupts, set another flag and resume
execution at the point the interrupt happened. Setting condition flags as a side effect of an
arithmetic or logical instruction is really fast. This use of the ICC2 only occurs within the
kernel - it does not affect userspace.
The flags we use are:
(*) CCR.ICC2.Z [Zero flag]
Set to virtually disable interrupts, clear when interrupts are virtually enabled. Can be
modified by logical instructions without affecting the Carry flag.
(*) CCR.ICC2.C [Carry flag]
Clear to indicate hardware interrupts are really disabled, set otherwise.
What happens is this:
(1) Normal kernel-mode operation.
ICC2.Z is 0, ICC2.C is 1.
(2) An interrupt occurs. The exception prologue examines ICC2.Z and determines that nothing needs
doing. This is done simply with an unlikely BEQ instruction.
(3) The interrupts are disabled (local_irq_disable)
ICC2.Z is set to 1.
(4) If interrupts were then re-enabled (local_irq_enable):
ICC2.Z would be set to 0.
A TIHI #2 instruction (trap #2 if condition HI - Z==0 && C==0) would be used to trap if
interrupts were now virtually enabled, but physically disabled - which they're not, so the
trap isn't taken. The kernel would then be back to state (1).
(5) An interrupt occurs. The exception prologue examines ICC2.Z and determines that the interrupt
shouldn't actually have happened. It jumps aside, and there disables interrupts by setting
PSR.PIL to 14 and then it clears ICC2.C.
(6) If interrupts were then saved and disabled again (local_irq_save):
ICC2.Z would be shifted into the save variable and masked off (giving a 1).
ICC2.Z would then be set to 1 (thus unchanged), and ICC2.C would be unaffected (ie: 0).
(7) If interrupts were then restored from state (6) (local_irq_restore):
ICC2.Z would be set to indicate the result of XOR'ing the saved value (ie: 1) with 1, which
gives a result of 0 - thus leaving ICC2.Z set.
ICC2.C would remain unaffected (ie: 0).
A TIHI #2 instruction would be used to again assay the current state, but this would do
nothing as Z==1.
(8) If interrupts were then enabled (local_irq_enable):
ICC2.Z would be cleared. ICC2.C would be left unaffected. Both flags would now be 0.
A TIHI #2 instruction again issued to assay the current state would then trap as both Z==0
[interrupts virtually enabled] and C==0 [interrupts really disabled] would then be true.
(9) The trap #2 handler would simply enable hardware interrupts (set PSR.PIL to 0), set ICC2.C to
1 and return.
(10) Immediately upon returning, the pending interrupt would be taken.
(11) The interrupt handler would take the path of actually processing the interrupt (ICC2.Z is
clear, BEQ fails as per step (2)).
(12) The interrupt handler would then set ICC2.C to 1 since hardware interrupts are definitely
enabled - or else the kernel wouldn't be here.
(13) On return from the interrupt handler, things would be back to state (1).
This trap (#2) is only available in kernel mode. In user mode it will result in SIGILL.
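The following userspace model is an editorial sketch (not kernel code, and the
helper names are invented); it only mimics the flag transitions in steps (1)
to (13) above, with Z tracking virtual disablement and C tracking whether
hardware interrupts are really enabled:

	#include <stdbool.h>
	#include <stdio.h>

	static bool icc2_z;		/* 1 => interrupts virtually disabled */
	static bool icc2_c = true;	/* 1 => hardware interrupts really enabled */

	static void tihi2_check(void)	/* models TIHI #2: trap if Z==0 && C==0 */
	{
		if (!icc2_z && !icc2_c) {
			printf("trap #2: re-enable hardware interrupts\n");
			icc2_c = true;			/* step (9) */
		}
	}

	static void model_irq_disable(void) { icc2_z = true; }	/* step (3) */

	static void model_irq_enable(void)	/* steps (4) and (8) */
	{
		icc2_z = false;
		tihi2_check();
	}

	static bool model_irq_save(void)	/* step (6) */
	{
		bool flags = icc2_z;
		icc2_z = true;
		return flags;
	}

	static void model_irq_restore(bool flags)	/* step (7) */
	{
		icc2_z = flags;
		tihi2_check();
	}

	static void interrupt_arrives(void)	/* steps (2) and (5) */
	{
		if (icc2_z) {
			printf("deferred: really disable, clear C\n");
			icc2_c = false;
		} else {
			printf("handled: hardware irqs are on, set C\n");
			icc2_c = true;			/* step (12) */
		}
	}

	int main(void)
	{
		bool flags;

		model_irq_disable();		/* (3) */
		interrupt_arrives();		/* (5): deferred, C cleared */
		flags = model_irq_save();	/* (6): Z stays set */
		model_irq_restore(flags);	/* (7): Z still set, no trap */
		model_irq_enable();		/* (8)-(9): trap #2 fires */
		interrupt_arrives();		/* (10)-(13): now handled */
		return 0;
	}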

View File

@ -36,6 +36,10 @@ Module Parameters
(default is 1)
Use 'init=0' to bypass initializing the chip.
Try this if your computer crashes when you load the module.
* reset: int
(default is 0)
The driver used to reset the chip on load, but does no more. Use
'reset=1' to restore the old behavior. Report if you need to do this.
Description
-----------

View File

@ -1133,6 +1133,8 @@ running once the system is up.
Mechanism 1.
conf2 [IA-32] Force use of PCI Configuration
Mechanism 2.
nommconf [IA-32,X86_64] Disable use of MMCONFIG for PCI
Configuration
nosort [IA-32] Don't sort PCI devices according to
order given by the PCI BIOS. This sorting is
done to get a device order compatible with
@ -1636,6 +1638,9 @@ running once the system is up.
Format:
<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
norandmaps Don't use address space randomization
Equivalent to echo 0 > /proc/sys/kernel/randomize_va_space
______________________________________________________________________
Changelog:

View File

@ -136,17 +136,20 @@ Kprobes, jprobes, and return probes are implemented on the following
architectures:
- i386
- x86_64 (AMD-64, E64MT)
- x86_64 (AMD-64, EM64T)
- ppc64
- ia64 (Support for probes on certain instruction types is still in progress.)
- ia64 (Does not support probes on instruction slot1.)
- sparc64 (Return probes not yet implemented.)
3. Configuring Kprobes
When configuring the kernel using make menuconfig/xconfig/oldconfig,
ensure that CONFIG_KPROBES is set to "y". Under "Kernel hacking",
look for "Kprobes". You may have to enable "Kernel debugging"
(CONFIG_DEBUG_KERNEL) before you can enable Kprobes.
ensure that CONFIG_KPROBES is set to "y". Under "Instrumentation
Support", look for "Kprobes".
So that you can load and unload Kprobes-based instrumentation modules,
make sure "Loadable module support" (CONFIG_MODULES) and "Module
unloading" (CONFIG_MODULE_UNLOAD) are set to "y".
You may also want to ensure that CONFIG_KALLSYMS and perhaps even
CONFIG_KALLSYMS_ALL are set to "y", since kallsyms_lookup_name()
@ -262,18 +265,18 @@ at any time after the probe has been registered.
5. Kprobes Features and Limitations
As of Linux v2.6.12, Kprobes allows multiple probes at the same
address. Currently, however, there cannot be multiple jprobes on
the same function at the same time.
Kprobes allows multiple probes at the same address. Currently,
however, there cannot be multiple jprobes on the same function at
the same time.
In general, you can install a probe anywhere in the kernel.
In particular, you can probe interrupt handlers. Known exceptions
are discussed in this section.
For obvious reasons, it's a bad idea to install a probe in
the code that implements Kprobes (mostly kernel/kprobes.c and
arch/*/kernel/kprobes.c). A patch in the v2.6.13 timeframe instructs
Kprobes to reject such requests.
The register_*probe functions will return -EINVAL if you attempt
to install a probe in the code that implements Kprobes (mostly
kernel/kprobes.c and arch/*/kernel/kprobes.c, but also functions such
as do_page_fault and notifier_call_chain).
If you install a probe in an inline-able function, Kprobes makes
no attempt to chase down all inline instances of the function and
@ -290,18 +293,14 @@ from the accidental ones. Don't drink and probe.
Kprobes makes no attempt to prevent probe handlers from stepping on
each other -- e.g., probing printk() and then calling printk() from a
probe handler. As of Linux v2.6.12, if a probe handler hits a probe,
that second probe's handlers won't be run in that instance.
probe handler. If a probe handler hits a probe, that second probe's
handlers won't be run in that instance, and the kprobe.nmissed member
of the second probe will be incremented.
In Linux v2.6.12 and previous versions, Kprobes' data structures are
protected by a single lock that is held during probe registration and
unregistration and while handlers are run. Thus, no two handlers
can run simultaneously. To improve scalability on SMP systems,
this restriction will probably be removed soon, in which case
multiple handlers (or multiple instances of the same handler) may
run concurrently on different CPUs. Code your handlers accordingly.
As of Linux v2.6.15-rc1, multiple handlers (or multiple instances of
the same handler) may run concurrently on different CPUs.
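(Editorial sketch, not from the document: because handlers can now run
concurrently, shared state in a handler should use atomic or properly locked
types rather than a plain counter; the names below are invented.)

	#include <linux/kprobes.h>
	#include <asm/atomic.h>

	static atomic_t hits = ATOMIC_INIT(0);

	static int count_pre(struct kprobe *p, struct pt_regs *regs)
	{
		atomic_inc(&hits);	/* safe even if two CPUs hit the probe at once */
		return 0;
	}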
Kprobes does not use semaphores or allocate memory except during
Kprobes does not use mutexes or allocate memory except during
registration and unregistration.
Probe handlers are run with preemption disabled. Depending on the
@ -316,11 +315,18 @@ address instead of the real return address for kretprobed functions.
(As far as we can tell, __builtin_return_address() is used only
for instrumentation and error reporting.)
If the number of times a function is called does not match the
number of times it returns, registering a return probe on that
function may produce undesirable results. We have the do_exit()
and do_execve() cases covered. do_fork() is not an issue. We're
unaware of other specific cases where this could be a problem.
If the number of times a function is called does not match the number
of times it returns, registering a return probe on that function may
produce undesirable results. We have the do_exit() case covered.
do_execve() and do_fork() are not an issue. We're unaware of other
specific cases where this could be a problem.
If, upon entry to or exit from a function, the CPU is running on
a stack other than that of the current task, registering a return
probe on that function may produce undesirable results. For this
reason, Kprobes doesn't support return probes (or kprobes or jprobes)
on the x86_64 version of __switch_to(); the registration functions
return -EINVAL.
6. Probe Overhead
@ -347,14 +353,12 @@ k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99
7. TODO
a. SystemTap (http://sourceware.org/systemtap): Work in progress
to provide a simplified programming interface for probe-based
instrumentation.
b. Improved SMP scalability: Currently, work is in progress to handle
multiple kprobes in parallel.
c. Kernel return probes for sparc64.
d. Support for other architectures.
e. User-space probes.
a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
programming interface for probe-based instrumentation. Try it out.
b. Kernel return probes for sparc64.
c. Support for other architectures.
d. User-space probes.
e. Watchpoint probes (which fire on data references).
8. Kprobes Example
@ -411,8 +415,7 @@ int init_module(void)
printk("Couldn't find %s to plant kprobe\n", "do_fork");
return -1;
}
ret = register_kprobe(&kp);
if (ret < 0) {
if ((ret = register_kprobe(&kp)) < 0) {
printk("register_kprobe failed, returned %d\n", ret);
return -1;
}
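For context, an editorial sketch of the surrounding module in the spirit of the
documentation's example (handler bodies trimmed; it assumes
kallsyms_lookup_name() to resolve the probe address, as the example does):

	#include <linux/kernel.h>
	#include <linux/module.h>
	#include <linux/kprobes.h>
	#include <linux/kallsyms.h>

	static int handler_pre(struct kprobe *p, struct pt_regs *regs)
	{
		printk("pre_handler: p->addr=0x%p\n", p->addr);
		return 0;
	}

	static struct kprobe kp = {
		.pre_handler = handler_pre,
	};

	int init_module(void)
	{
		int ret;

		kp.addr = (kprobe_opcode_t *) kallsyms_lookup_name("do_fork");
		if (!kp.addr) {
			printk("Couldn't find %s to plant kprobe\n", "do_fork");
			return -1;
		}
		if ((ret = register_kprobe(&kp)) < 0) {
			printk("register_kprobe failed, returned %d\n", ret);
			return -1;
		}
		printk("kprobe registered\n");
		return 0;
	}

	void cleanup_module(void)
	{
		unregister_kprobe(&kp);
		printk("kprobe unregistered\n");
	}

	MODULE_LICENSE("GPL");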

View File

@ -95,11 +95,13 @@ CONFIG_BLK_DEV_IDEDMA_PCI=y
CONFIG_IDEDMA_PCI_AUTO=y
CONFIG_BLK_DEV_IDE_AU1XXX=y
CONFIG_BLK_DEV_IDE_AU1XXX_MDMA2_DBDMA=y
CONFIG_BLK_DEV_IDE_AU1XXX_BURSTABLE_ON=y
CONFIG_BLK_DEV_IDE_AU1XXX_SEQTS_PER_RQ=128
CONFIG_BLK_DEV_IDEDMA=y
CONFIG_IDEDMA_AUTO=y
Also define 'IDE_AU1XXX_BURSTMODE' in 'drivers/ide/mips/au1xxx-ide.c' to enable
burst support on the DBDMA controller.
If the system in use needs USB support, enable the following kernel config
options for high IDE-to-USB throughput.
@ -115,6 +117,8 @@ CONFIG_BLK_DEV_IDE_AU1XXX_SEQTS_PER_RQ=128
CONFIG_BLK_DEV_IDEDMA=y
CONFIG_IDEDMA_AUTO=y
Also undefine 'IDE_AU1XXX_BURSTMODE' in 'drivers/ide/mips/au1xxx-ide.c' to
disable burst support on the DBDMA controller.
ADD NEW HARD DISC TO WHITE OR BLACK LIST
----------------------------------------

View File

@ -1,3 +1,26 @@
1 Release Date : Wed Feb 03 14:31:44 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
3 Older Version : 00.00.02.04
i. Support for 1078 type (ppc IOP) controller, device id : 0x60 added.
During initialization, depending on the device id, the template members
are initialized with function pointers specific to the ppc or
xscale controllers.
-Sumant Patro <Sumant.Patro@lsil.com>
1 Release Date : Fri Feb 03 14:16:25 PST 2006 - Sumant Patro
<Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
3 Older Version : 00.00.02.02
i. Register 16 byte CDB capability with scsi midlayer
"Ths patch properly registers the 16 byte command length capability of the
megaraid_sas controlled hardware with the scsi midlayer. All megaraid_sas
hardware supports 16 byte CDB's."
-Joshua Giles <joshua_giles@dell.com>
1 Release Date : Mon Jan 23 14:09:01 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.02
3 Older Version : 00.00.02.01

View File

@ -16,6 +16,7 @@ before actually making adjustments.
Currently, these files might (depending on your configuration)
show up in /proc/sys/kernel:
- acpi_video_flags
- acct
- core_pattern
- core_uses_pid
@ -57,6 +58,15 @@ show up in /proc/sys/kernel:
==============================================================
acpi_video_flags:
flags
See Doc*/kernel/power/video.txt; it allows the video boot mode to be
set at run time.
==============================================================
acct:
highwater lowwater frequency
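(Editorial note, not part of this patch: the three values are typically read
and set via /proc, e.g. 'cat /proc/sys/kernel/acct' usually shows the defaults
'4 2 30': resume accounting above 4% free space, suspend below 2%, checking
every 30 seconds.)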

View File

@ -2232,7 +2232,23 @@ P: Martin Schwidefsky
M: schwidefsky@de.ibm.com
M: linux390@de.ibm.com
L: linux-390@vm.marist.edu
W: http://oss.software.ibm.com/developerworks/opensource/linux390
W: http://www.ibm.com/developerworks/linux/linux390/
S: Supported
S390 NETWORK DRIVERS
P: Frank Pavlic
M: fpavlic@de.ibm.com
M: linux390@de.ibm.com
L: linux-390@vm.marist.edu
W: http://www.ibm.com/developerworks/linux/linux390/
S: Supported
S390 ZFCP DRIVER
P: Andreas Herrmann
M: aherrman@de.ibm.com
M: linux390@de.ibm.com
L: linux-390@vm.marist.edu
W: http://www.ibm.com/developerworks/linux/linux390/
S: Supported
SAA7146 VIDEO4LINUX-2 DRIVER

View File

@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 16
EXTRAVERSION =-rc2
EXTRAVERSION =-rc4
NAME=Sliding Snow Leopard
# *DOCUMENTATION*
@ -106,13 +106,12 @@ KBUILD_OUTPUT := $(shell cd $(KBUILD_OUTPUT) && /bin/pwd)
$(if $(KBUILD_OUTPUT),, \
$(error output directory "$(saved-output)" does not exist))
.PHONY: $(MAKECMDGOALS) cdbuilddir
$(MAKECMDGOALS) _all: cdbuilddir
.PHONY: $(MAKECMDGOALS)
cdbuilddir:
$(filter-out _all,$(MAKECMDGOALS)) _all:
$(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \
KBUILD_SRC=$(CURDIR) \
KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $(MAKECMDGOALS)
KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile $@
# Leave processing to above invocation of make
skip-makefile := 1

View File

@ -128,19 +128,27 @@ EXPORT_SYMBOL(rtc_tm_to_time);
/*
* Calculate the next alarm time given the requested alarm time mask
* and the current time.
*
* FIXME: for now, we just copy the alarm time because we're lazy (and
* is therefore buggy - setting a 10am alarm at 8pm will not result in
* the alarm triggering.)
*/
void rtc_next_alarm_time(struct rtc_time *next, struct rtc_time *now, struct rtc_time *alrm)
{
unsigned long next_time;
unsigned long now_time;
next->tm_year = now->tm_year;
next->tm_mon = now->tm_mon;
next->tm_mday = now->tm_mday;
next->tm_hour = alrm->tm_hour;
next->tm_min = alrm->tm_min;
next->tm_sec = alrm->tm_sec;
rtc_tm_to_time(now, &now_time);
rtc_tm_to_time(next, &next_time);
if (next_time < now_time) {
/* Advance one day */
next_time += 60 * 60 * 24;
rtc_time_to_tm(next_time, next);
}
}
static inline int rtc_read_time(struct rtc_ops *ops, struct rtc_time *tm)

View File

@ -111,7 +111,7 @@
CALL(sys_statfs)
/* 100 */ CALL(sys_fstatfs)
CALL(sys_ni_syscall)
CALL(OBSOLETE(sys_socketcall))
CALL(OBSOLETE(ABI(sys_socketcall, sys_oabi_socketcall)))
CALL(sys_syslog)
CALL(sys_setitimer)
/* 105 */ CALL(sys_getitimer)

View File

@ -566,7 +566,7 @@ ENTRY(__switch_to)
ldr r6, [r2, #TI_CPU_DOMAIN]!
#endif
#if __LINUX_ARM_ARCH__ >= 6
#ifdef CONFIG_CPU_MPCORE
#ifdef CONFIG_CPU_32v6K
clrex
#else
strex r5, r4, [ip] @ Clear exclusive monitor

View File

@ -23,6 +23,7 @@
#include <linux/root_dev.h>
#include <linux/cpu.h>
#include <linux/interrupt.h>
#include <linux/smp.h>
#include <asm/cpu.h>
#include <asm/elf.h>
@ -771,6 +772,10 @@ void __init setup_arch(char **cmdline_p)
paging_init(&meminfo, mdesc);
request_standard_resources(&meminfo, mdesc);
#ifdef CONFIG_SMP
smp_init_cpus();
#endif
cpu_init();
/*

View File

@ -338,7 +338,6 @@ void __init smp_prepare_boot_cpu(void)
per_cpu(cpu_data, cpu).idle = current;
cpu_set(cpu, cpu_possible_map);
cpu_set(cpu, cpu_present_map);
cpu_set(cpu, cpu_online_map);
}

View File

@ -64,6 +64,7 @@
* sys_connect:
* sys_sendmsg:
* sys_sendto:
* sys_socketcall:
*
* struct sockaddr_un loses its padding with EABI. Since the size of the
* structure is used as a validation test in unix_mkname(), we need to
@ -78,6 +79,7 @@
#include <linux/eventpoll.h>
#include <linux/sem.h>
#include <linux/socket.h>
#include <linux/net.h>
#include <asm/ipc.h>
#include <asm/uaccess.h>
@ -408,3 +410,31 @@ asmlinkage long sys_oabi_sendmsg(int fd, struct msghdr __user *msg, unsigned fla
return sys_sendmsg(fd, msg, flags);
}
asmlinkage long sys_oabi_socketcall(int call, unsigned long __user *args)
{
unsigned long r = -EFAULT, a[6];
switch (call) {
case SYS_BIND:
if (copy_from_user(a, args, 3 * sizeof(long)) == 0)
r = sys_oabi_bind(a[0], (struct sockaddr __user *)a[1], a[2]);
break;
case SYS_CONNECT:
if (copy_from_user(a, args, 3 * sizeof(long)) == 0)
r = sys_oabi_connect(a[0], (struct sockaddr __user *)a[1], a[2]);
break;
case SYS_SENDTO:
if (copy_from_user(a, args, 6 * sizeof(long)) == 0)
r = sys_oabi_sendto(a[0], (void __user *)a[1], a[2], a[3],
(struct sockaddr __user *)a[4], a[5]);
break;
case SYS_SENDMSG:
if (copy_from_user(a, args, 3 * sizeof(long)) == 0)
r = sys_oabi_sendmsg(a[0], (struct msghdr __user *)a[1], a[2]);
break;
default:
r = sys_socketcall(call, args);
}
return r;
}

View File

@ -19,6 +19,7 @@
#include <linux/personality.h>
#include <linux/ptrace.h>
#include <linux/kallsyms.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <asm/atomic.h>
@ -231,6 +232,13 @@ NORET_TYPE void die(const char *str, struct pt_regs *regs, int err)
__die(str, err, thread, regs);
bust_spinlocks(0);
spin_unlock_irq(&die_lock);
if (panic_on_oops) {
printk(KERN_EMERG "Fatal exception: panic in 5 seconds\n");
ssleep(5);
panic("Fatal exception");
}
do_exit(SIGSEGV);
}

View File

@ -100,8 +100,10 @@ void __init at91_add_device_udc(struct at91_udc_data *data)
at91_set_gpio_input(data->vbus_pin, 0);
at91_set_deglitch(data->vbus_pin, 1);
}
if (data->pullup_pin)
if (data->pullup_pin) {
at91_set_gpio_output(data->pullup_pin, 0);
at91_set_multi_drive(data->pullup_pin, 1);
}
udc_data = *data;
platform_device_register(&at91rm9200_udc_device);

View File

@ -159,6 +159,23 @@ int __init_or_module at91_set_deglitch(unsigned pin, int is_on)
}
EXPORT_SYMBOL(at91_set_deglitch);
/*
* enable/disable the multi-driver; This is only valid for output and
* allows the output pin to run as an open collector output.
*/
int __init_or_module at91_set_multi_drive(unsigned pin, int is_on)
{
void __iomem *pio = pin_to_controller(pin);
unsigned mask = pin_to_mask(pin);
if (!pio)
return -EINVAL;
__raw_writel(mask, pio + (is_on ? PIO_MDER : PIO_MDDR));
return 0;
}
EXPORT_SYMBOL(at91_set_multi_drive);
/*--------------------------------------------------------------------------*/

View File

@ -140,6 +140,18 @@ static void __init poke_milo(void)
mb();
}
/*
* Initialise the CPU possible map early - this describes the CPUs
* which may be present or become present in the system.
*/
void __init smp_init_cpus(void)
{
unsigned int i, ncores = get_core_count();
for (i = 0; i < ncores; i++)
cpu_set(i, cpu_possible_map);
}
void __init smp_prepare_cpus(unsigned int max_cpus)
{
unsigned int ncores = get_core_count();
@ -176,14 +188,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
max_cpus = ncores;
/*
* Initialise the possible/present maps.
* cpu_possible_map describes the set of CPUs which may be present
* cpu_present_map describes the set of CPUs populated
* Initialise the present map, which describes the set of CPUs
* actually populated at the present time.
*/
for (i = 0; i < max_cpus; i++) {
cpu_set(i, cpu_possible_map);
for (i = 0; i < max_cpus; i++)
cpu_set(i, cpu_present_map);
}
/*
* Do we need any more CPUs? If so, then let them know where

View File

@ -13,7 +13,6 @@
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/config.h>
#include <linux/init.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/platform_device.h>

View File

@ -12,7 +12,6 @@
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/config.h>
#include <linux/init.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/platform_device.h>

View File

@ -111,24 +111,30 @@ static int ixp4xx_set_irq_type(unsigned int irq, unsigned int type)
if (line < 0)
return -EINVAL;
if (type & IRQT_BOTHEDGE) {
switch (type){
case IRQT_BOTHEDGE:
int_style = IXP4XX_GPIO_STYLE_TRANSITIONAL;
irq_type = IXP4XX_IRQ_EDGE;
} else if (type & IRQT_RISING) {
break;
case IRQT_RISING:
int_style = IXP4XX_GPIO_STYLE_RISING_EDGE;
irq_type = IXP4XX_IRQ_EDGE;
} else if (type & IRQT_FALLING) {
break;
case IRQT_FALLING:
int_style = IXP4XX_GPIO_STYLE_FALLING_EDGE;
irq_type = IXP4XX_IRQ_EDGE;
} else if (type & IRQT_HIGH) {
break;
case IRQT_HIGH:
int_style = IXP4XX_GPIO_STYLE_ACTIVE_HIGH;
irq_type = IXP4XX_IRQ_LEVEL;
} else if (type & IRQT_LOW) {
break;
case IRQT_LOW:
int_style = IXP4XX_GPIO_STYLE_ACTIVE_LOW;
irq_type = IXP4XX_IRQ_LEVEL;
} else
break;
default:
return -EINVAL;
}
ixp4xx_config_irq(irq, irq_type);
if (line >= 8) { /* pins 8-15 */

View File

@ -77,6 +77,9 @@ static int __init nslu2_power_init(void)
static void __exit nslu2_power_exit(void)
{
if (!(machine_is_nslu2()))
return;
free_irq(NSLU2_RB_IRQ, NULL);
free_irq(NSLU2_PB_IRQ, NULL);
}

View File

@ -27,8 +27,6 @@ static struct flash_platform_data nslu2_flash_data = {
};
static struct resource nslu2_flash_resource = {
.start = NSLU2_FLASH_BASE,
.end = NSLU2_FLASH_BASE + NSLU2_FLASH_SIZE,
.flags = IORESOURCE_MEM,
};
@ -52,6 +50,12 @@ static struct platform_device nslu2_i2c_controller = {
.num_resources = 0,
};
static struct platform_device nslu2_beeper = {
.name = "ixp4xx-beeper",
.id = NSLU2_GPIO_BUZZ,
.num_resources = 0,
};
static struct resource nslu2_uart_resources[] = {
{
.start = IXP4XX_UART1_BASE_PHYS,
@ -99,6 +103,7 @@ static struct platform_device *nslu2_devices[] __initdata = {
&nslu2_i2c_controller,
&nslu2_flash,
&nslu2_uart,
&nslu2_beeper,
};
static void nslu2_power_off(void)
@ -116,6 +121,10 @@ static void __init nslu2_init(void)
{
ixp4xx_sys_init();
nslu2_flash_resource.start = IXP4XX_EXP_BUS_BASE(0);
nslu2_flash_resource.end =
IXP4XX_EXP_BUS_BASE(0) + ixp4xx_exp_bus_size - 1;
pm_power_off = nslu2_power_off;
platform_add_devices(nslu2_devices, ARRAY_SIZE(nslu2_devices));

View File

@ -143,6 +143,18 @@ static void __init poke_milo(void)
mb();
}
/*
* Initialise the CPU possible map early - this describes the CPUs
* which may be present or become present in the system.
*/
void __init smp_init_cpus(void)
{
unsigned int i, ncores = get_core_count();
for (i = 0; i < ncores; i++)
cpu_set(i, cpu_possible_map);
}
void __init smp_prepare_cpus(unsigned int max_cpus)
{
unsigned int ncores = get_core_count();
@ -179,14 +191,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
local_timer_setup(cpu);
/*
* Initialise the possible/present maps.
* cpu_possible_map describes the set of CPUs which may be present
* cpu_present_map describes the set of CPUs populated
* Initialise the present map, which describes the set of CPUs
* actually populated at the present time.
*/
for (i = 0; i < max_cpus; i++) {
cpu_set(i, cpu_possible_map);
for (i = 0; i < max_cpus; i++)
cpu_set(i, cpu_present_map);
}
/*
* Do we need any more CPUs? If so, then let them know where

View File

@ -46,10 +46,11 @@
#include <asm/irq.h>
#include <asm/mach-types.h>
//#include <asm/debug-ll.h>
#include <asm/arch/regs-serial.h>
#include <asm/arch/regs-lcd.h>
#include <asm/arch/h1940-latch.h>
#include <asm/arch/fb.h>
#include <linux/serial_core.h>
@ -59,7 +60,12 @@
#include "cpu.h"
static struct map_desc h1940_iodesc[] __initdata = {
/* nothing here yet */
[0] = {
.virtual = (unsigned long)H1940_LATCH,
.pfn = __phys_to_pfn(H1940_PA_LATCH),
.length = SZ_16K,
.type = MT_DEVICE
},
};
#define UCON S3C2410_UCON_DEFAULT | S3C2410_UCON_UCLK
@ -92,6 +98,25 @@ static struct s3c2410_uartcfg h1940_uartcfgs[] = {
}
};
/* Board control latch control */
static unsigned int latch_state = H1940_LATCH_DEFAULT;
void h1940_latch_control(unsigned int clear, unsigned int set)
{
unsigned long flags;
local_irq_save(flags);
latch_state &= ~clear;
latch_state |= set;
__raw_writel(latch_state, H1940_LATCH);
local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(h1940_latch_control);
/**

View File

@ -0,0 +1,31 @@
/* arch/arm/mach-s3c2410/s3c2400.h
*
* Copyright (c) 2004 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk>
*
* Header file for S3C2400 cpu support
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Modifications:
* 09-Fev-2006 LCVR First version, based on s3c2410.h
*/
#ifdef CONFIG_CPU_S3C2400
extern int s3c2400_init(void);
extern void s3c2400_map_io(struct map_desc *mach_desc, int size);
extern void s3c2400_init_uarts(struct s3c2410_uartcfg *cfg, int no);
extern void s3c2400_init_clocks(int xtal);
#else
#define s3c2400_init_clocks NULL
#define s3c2400_init_uarts NULL
#define s3c2400_map_io NULL
#define s3c2400_init NULL
#endif

View File

@ -240,6 +240,14 @@ int __init pci_versatile_setup(int nr, struct pci_sys_data *sys)
int i;
int myslot = -1;
unsigned long val;
void __iomem *local_pci_cfg_base;
val = __raw_readl(SYS_PCICTL);
if (!(val & 1)) {
printk("Not plugged into PCI backplane!\n");
ret = -EIO;
goto out;
}
if (nr == 0) {
sys->mem_offset = 0;
@ -253,48 +261,45 @@ int __init pci_versatile_setup(int nr, struct pci_sys_data *sys)
goto out;
}
__raw_writel(VERSATILE_PCI_MEM_BASE0 >> 28,PCI_IMAP0);
__raw_writel(VERSATILE_PCI_MEM_BASE1 >> 28,PCI_IMAP1);
__raw_writel(VERSATILE_PCI_MEM_BASE2 >> 28,PCI_IMAP2);
__raw_writel(1, SYS_PCICTL);
val = __raw_readl(SYS_PCICTL);
if (!(val & 1)) {
printk("Not plugged into PCI backplane!\n");
ret = -EIO;
goto out;
}
/*
* We need to discover the PCI core first to configure itself
* before the main PCI probing is performed
*/
for (i=0; i<32; i++) {
for (i=0; i<32; i++)
if ((__raw_readl(VERSATILE_PCI_VIRT_BASE+(i<<11)+DEVICE_ID_OFFSET) == VP_PCI_DEVICE_ID) &&
(__raw_readl(VERSATILE_PCI_VIRT_BASE+(i<<11)+CLASS_ID_OFFSET) == VP_PCI_CLASS_ID)) {
myslot = i;
__raw_writel(myslot, PCI_SELFID);
val = __raw_readl(VERSATILE_PCI_CFG_VIRT_BASE+(myslot<<11)+CSR_OFFSET);
val |= (1<<2);
__raw_writel(val, VERSATILE_PCI_CFG_VIRT_BASE+(myslot<<11)+CSR_OFFSET);
break;
}
}
if (myslot == -1) {
printk("Cannot find PCI core!\n");
ret = -EIO;
} else {
printk("PCI core found (slot %d)\n",myslot);
/* Do not to map Versatile FPGA PCI device
into memory space as we are short of
mappable memory */
pci_slot_ignore |= (1 << myslot);
ret = 1;
goto out;
}
printk("PCI core found (slot %d)\n",myslot);
__raw_writel(myslot, PCI_SELFID);
local_pci_cfg_base = (void *) VERSATILE_PCI_CFG_VIRT_BASE + (myslot << 11);
val = __raw_readl(local_pci_cfg_base + CSR_OFFSET);
val |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | PCI_COMMAND_INVALIDATE;
__raw_writel(val, local_pci_cfg_base + CSR_OFFSET);
/*
* Configure the PCI inbound memory windows to be 1:1 mapped to SDRAM
*/
__raw_writel(PHYS_OFFSET, local_pci_cfg_base + PCI_BASE_ADDRESS_0);
__raw_writel(PHYS_OFFSET, local_pci_cfg_base + PCI_BASE_ADDRESS_1);
__raw_writel(PHYS_OFFSET, local_pci_cfg_base + PCI_BASE_ADDRESS_2);
/*
* Do not to map Versatile FPGA PCI device into memory space
*/
pci_slot_ignore |= (1 << myslot);
ret = 1;
out:
return ret;
}
@ -305,18 +310,18 @@ struct pci_bus *pci_versatile_scan_bus(int nr, struct pci_sys_data *sys)
return pci_scan_bus(sys->busnr, &pci_versatile_ops, sys);
}
/*
* V3_LB_BASE? - local bus address
* V3_LB_MAP? - pci bus address
*/
void __init pci_versatile_preinit(void)
{
}
__raw_writel(VERSATILE_PCI_MEM_BASE0 >> 28, PCI_IMAP0);
__raw_writel(VERSATILE_PCI_MEM_BASE1 >> 28, PCI_IMAP1);
__raw_writel(VERSATILE_PCI_MEM_BASE2 >> 28, PCI_IMAP2);
void __init pci_versatile_postinit(void)
{
}
__raw_writel(PHYS_OFFSET >> 28, PCI_SMAP0);
__raw_writel(PHYS_OFFSET >> 28, PCI_SMAP1);
__raw_writel(PHYS_OFFSET >> 28, PCI_SMAP2);
__raw_writel(1, SYS_PCICTL);
}
/*
* map the specified device/slot/pin to an IRQ. Different backplanes may need to modify this.
@ -326,16 +331,15 @@ static int __init versatile_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
int irq;
int devslot = PCI_SLOT(dev->devfn);
/* slot, pin, irq
24 1 27
25 1 28 untested
26 1 29
27 1 30 untested
*/
/* slot, pin, irq
* 24 1 27
* 25 1 28
* 26 1 29
* 27 1 30
*/
irq = 27 + ((slot + pin - 1) & 3);
irq = 27 + ((slot + pin + 2) % 3); /* Fudged */
printk("map irq: slot %d, pin %d, devslot %d, irq: %d\n",slot,pin,devslot,irq);
printk("PCI map irq: slot %d, pin %d, devslot %d, irq: %d\n",slot,pin,devslot,irq);
return irq;
}
@ -347,7 +351,6 @@ static struct hw_pci versatile_pci __initdata = {
.setup = pci_versatile_setup,
.scan = pci_versatile_scan_bus,
.preinit = pci_versatile_preinit,
.postinit = pci_versatile_postinit,
};
static int __init versatile_pci_init(void)

View File

@ -20,7 +20,7 @@
*/
.align 5
ENTRY(v6_early_abort)
#ifdef CONFIG_CPU_MPCORE
#ifdef CONFIG_CPU_32v6K
clrex
#else
strex r0, r1, [sp] @ Clear the exclusive monitor

View File

@ -38,7 +38,6 @@
#include <linux/pm.h>
#include <linux/sched.h>
#include <linux/proc_fs.h>
#include <linux/pm.h>
#include <linux/interrupt.h>
#include <asm/io.h>

View File

@ -12,7 +12,7 @@
#
# http://www.arm.linux.org.uk/developer/machines/?action=new
#
# Last update: Mon Jan 9 12:56:42 2006
# Last update: Mon Feb 20 10:18:02 2006
#
# machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number
#
@ -904,7 +904,7 @@ wg302v2 MACH_WG302V2 WG302V2 890
eb42x MACH_EB42X EB42X 891
iq331es MACH_IQ331ES IQ331ES 892
cosydsp MACH_COSYDSP COSYDSP 893
uplat7d MACH_UPLAT7D UPLAT7D 894
uplat7d_proto MACH_UPLAT7D UPLAT7D 894
ptdavinci MACH_PTDAVINCI PTDAVINCI 895
mbus MACH_MBUS MBUS 896
nadia2vb MACH_NADIA2VB NADIA2VB 897
@ -938,3 +938,34 @@ auckland MACH_AUCKLAND AUCKLAND 924
ak3220m MACH_AK3320M AK3320M 925
duramax MACH_DURAMAX DURAMAX 926
n35 MACH_N35 N35 927
pronghorn MACH_PRONGHORN PRONGHORN 928
fundy MACH_FUNDY FUNDY 929
logicpd_pxa270 MACH_LOGICPD_PXA270 LOGICPD_PXA270 930
cpu777 MACH_CPU777 CPU777 931
simicon9201 MACH_SIMICON9201 SIMICON9201 932
leap2_hpm MACH_LEAP2_HPM LEAP2_HPM 933
cm922txa10 MACH_CM922TXA10 CM922TXA10 934
sandgate MACH_PXA PXA 935
sandgate2 MACH_SANDGATE2 SANDGATE2 936
sandgate2g MACH_SANDGATE2G SANDGATE2G 937
sandgate2p MACH_SANDGATE2P SANDGATE2P 938
fred_jack MACH_FRED_JACK FRED_JACK 939
ttg_color1 MACH_TTG_COLOR1 TTG_COLOR1 940
nxeb500hmi MACH_NXEB500HMI NXEB500HMI 941
netdcu8 MACH_NETDCU8 NETDCU8 942
ml675050_cpu_boa MACH_ML675050_CPU_BOA ML675050_CPU_BOA 943
ng_fvx538 MACH_NG_FVX538 NG_FVX538 944
ng_fvs338 MACH_NG_FVS338 NG_FVS338 945
pnx4103 MACH_PNX4103 PNX4103 946
hesdb MACH_HESDB HESDB 947
xsilo MACH_XSILO XSILO 948
espresso MACH_ESPRESSO ESPRESSO 949
emlc MACH_EMLC EMLC 950
sisteron MACH_SISTERON SISTERON 951
rx1950 MACH_RX1950 RX1950 952
tsc_venus MACH_TSC_VENUS TSC_VENUS 953
ds101j MACH_DS101J DS101J 954
mxc300_30ads MACH_MXC30030ADS MXC30030ADS 955
fujitsu_wimaxsoc MACH_FUJITSU_WIMAXSOC FUJITSU_WIMAXSOC 956
dualpcmodem MACH_DUALPCMODEM DUALPCMODEM 957
gesbc9312 MACH_GESBC9312 GESBC9312 958

View File

@ -25,6 +25,10 @@ config GENERIC_HARDIRQS
bool
default n
config TIME_LOW_RES
bool
default y
mainmenu "Fujitsu FR-V Kernel Configuration"
source "init/Kconfig"

View File

@ -81,7 +81,7 @@ endif
# - reserve CC3 for use with atomic ops
# - all the extra registers are dealt with only at context switch time
CFLAGS += -mno-fdpic -mgpr-32 -msoft-float -mno-media
CFLAGS += -ffixed-fcc3 -ffixed-cc3 -ffixed-gr15
CFLAGS += -ffixed-fcc3 -ffixed-cc3 -ffixed-gr15 -ffixed-icc2
AFLAGS += -mno-fdpic
ASFLAGS += -mno-fdpic

View File

@ -200,12 +200,20 @@ __break_step:
movsg bpcsr,gr2
sethi.p %hi(__entry_kernel_external_interrupt),gr3
setlo %lo(__entry_kernel_external_interrupt),gr3
subcc gr2,gr3,gr0,icc0
subcc.p gr2,gr3,gr0,icc0
sethi %hi(__entry_uspace_external_interrupt),gr3
setlo.p %lo(__entry_uspace_external_interrupt),gr3
beq icc0,#2,__break_step_kernel_external_interrupt
sethi.p %hi(__entry_uspace_external_interrupt),gr3
setlo %lo(__entry_uspace_external_interrupt),gr3
subcc gr2,gr3,gr0,icc0
subcc.p gr2,gr3,gr0,icc0
sethi %hi(__entry_kernel_external_interrupt_virtually_disabled),gr3
setlo.p %lo(__entry_kernel_external_interrupt_virtually_disabled),gr3
beq icc0,#2,__break_step_uspace_external_interrupt
subcc.p gr2,gr3,gr0,icc0
sethi %hi(__entry_kernel_external_interrupt_virtual_reenable),gr3
setlo.p %lo(__entry_kernel_external_interrupt_virtual_reenable),gr3
beq icc0,#2,__break_step_kernel_external_interrupt_virtually_disabled
subcc gr2,gr3,gr0,icc0
beq icc0,#2,__break_step_kernel_external_interrupt_virtual_reenable
LEDS 0x2007,gr2
@ -254,6 +262,9 @@ __break_step_kernel_softprog_interrupt:
# step through an external interrupt from kernel mode
.globl __break_step_kernel_external_interrupt
__break_step_kernel_external_interrupt:
# deal with virtual interrupt disablement
beq icc2,#0,__break_step_kernel_external_interrupt_virtually_disabled
sethi.p %hi(__entry_kernel_external_interrupt_reentry),gr3
setlo %lo(__entry_kernel_external_interrupt_reentry),gr3
@ -294,6 +305,64 @@ __break_return_as_kernel_prologue:
#endif
rett #1
# we single-stepped into an interrupt handler whilst interrupts were merely virtually disabled
# need to really disable interrupts, set flag, fix up and return
__break_step_kernel_external_interrupt_virtually_disabled:
movsg psr,gr2
andi gr2,#~PSR_PIL,gr2
ori gr2,#PSR_PIL_14,gr2 /* debugging interrupts only */
movgs gr2,psr
ldi @(gr31,#REG_CCR),gr3
movgs gr3,ccr
subcc.p gr0,gr0,gr0,icc2 /* leave Z set, clear C */
# exceptions must've been enabled and we must've been in supervisor mode
setlos BPSR_BET|BPSR_BS,gr3
movgs gr3,bpsr
# return to where the interrupt happened
movsg pcsr,gr2
movgs gr2,bpcsr
lddi.p @(gr31,#REG_GR(2)),gr2
xor gr31,gr31,gr31
movgs gr0,brr
#ifdef CONFIG_MMU
movsg scr3,gr31
#endif
rett #1
# we stepped through into the virtual interrupt reenablement trap
#
# we also want to single step anyway, but after fixing up so that we get an event on the
# instruction after the broken-into exception returns
.globl __break_step_kernel_external_interrupt_virtual_reenable
__break_step_kernel_external_interrupt_virtual_reenable:
movsg psr,gr2
andi gr2,#~PSR_PIL,gr2
movgs gr2,psr
ldi @(gr31,#REG_CCR),gr3
movgs gr3,ccr
subicc gr0,#1,gr0,icc2 /* clear Z, set C */
# save the adjusted ICC2
movsg ccr,gr3
sti gr3,@(gr31,#REG_CCR)
# exceptions must've been enabled and we must've been in supervisor mode
setlos BPSR_BET|BPSR_BS,gr3
movgs gr3,bpsr
# return to where the trap happened
movsg pcsr,gr2
movgs gr2,bpcsr
# and then process the single step
bra __break_continue
# step through an internal exception from uspace mode
.globl __break_step_uspace_softprog_interrupt
__break_step_uspace_softprog_interrupt:

View File

@ -116,6 +116,8 @@ __break_kerneltrap_fixup_table:
.long __break_step_uspace_external_interrupt
.section .trap.kernel
.org \tbr_tt
# deal with virtual interrupt disablement
beq icc2,#0,__entry_kernel_external_interrupt_virtually_disabled
bra __entry_kernel_external_interrupt
.section .trap.fixup.kernel
.org \tbr_tt >> 2
@ -259,25 +261,52 @@ __trap_fixup_kernel_data_tlb_miss:
.org TBR_TT_TRAP0
.rept 127
bra __entry_uspace_softprog_interrupt
bra __break_step_uspace_softprog_interrupt
.long 0,0
.long 0,0,0
.endr
.org TBR_TT_BREAK
bra __entry_break
.long 0,0,0
.section .trap.fixup.user
.org TBR_TT_TRAP0 >> 2
.rept 127
.long __break_step_uspace_softprog_interrupt
.endr
.org TBR_TT_BREAK >> 2
.long 0
# miscellaneous kernel mode entry points
.section .trap.kernel
.org TBR_TT_TRAP0
.rept 127
bra __entry_kernel_softprog_interrupt
bra __break_step_kernel_softprog_interrupt
.long 0,0
.org TBR_TT_TRAP1
bra __entry_kernel_softprog_interrupt
# trap #2 in kernel - reenable interrupts
.org TBR_TT_TRAP2
bra __entry_kernel_external_interrupt_virtual_reenable
# miscellaneous kernel traps
.org TBR_TT_TRAP3
.rept 124
bra __entry_kernel_softprog_interrupt
.long 0,0,0
.endr
.org TBR_TT_BREAK
bra __entry_break
.long 0,0,0
.section .trap.fixup.kernel
.org TBR_TT_TRAP0 >> 2
.long __break_step_kernel_softprog_interrupt
.long __break_step_kernel_softprog_interrupt
.long __break_step_kernel_external_interrupt_virtual_reenable
.rept 124
.long __break_step_kernel_softprog_interrupt
.endr
.org TBR_TT_BREAK >> 2
.long 0
# miscellaneous debug mode entry points
.section .trap.break
.org TBR_TT_BREAK

View File

@ -141,7 +141,10 @@ __entry_uspace_external_interrupt_reentry:
movsg gner0,gr4
movsg gner1,gr5
stdi gr4,@(gr28,#REG_GNER0)
stdi.p gr4,@(gr28,#REG_GNER0)
# interrupts start off fully disabled in the interrupt handler
subcc gr0,gr0,gr0,icc2 /* set Z and clear C */
# set up kernel global registers
sethi.p %hi(__kernel_current_task),gr5
@ -193,9 +196,8 @@ __entry_uspace_external_interrupt_reentry:
.type __entry_kernel_external_interrupt,@function
__entry_kernel_external_interrupt:
LEDS 0x6210
sub sp,gr15,gr31
LEDS32
// sub sp,gr15,gr31
// LEDS32
# set up the stack pointer
or.p sp,gr0,gr30
@ -231,7 +233,10 @@ __entry_kernel_external_interrupt_reentry:
stdi gr24,@(gr28,#REG_GR(24))
stdi gr26,@(gr28,#REG_GR(26))
sti gr29,@(gr28,#REG_GR(29))
stdi gr30,@(gr28,#REG_GR(30))
stdi.p gr30,@(gr28,#REG_GR(30))
# note virtual interrupts will be fully enabled upon return
subicc gr0,#1,gr0,icc2 /* clear Z, set C */
movsg tbr ,gr20
movsg psr ,gr22
@ -267,7 +272,10 @@ __entry_kernel_external_interrupt_reentry:
movsg gner0,gr4
movsg gner1,gr5
stdi gr4,@(gr28,#REG_GNER0)
stdi.p gr4,@(gr28,#REG_GNER0)
# interrupts start off fully disabled in the interrupt handler
subcc gr0,gr0,gr0,icc2 /* set Z and clear C */
# set the return address
sethi.p %hi(__entry_return_from_kernel_interrupt),gr4
@ -291,6 +299,45 @@ __entry_kernel_external_interrupt_reentry:
.size __entry_kernel_external_interrupt,.-__entry_kernel_external_interrupt
###############################################################################
#
# deal with interrupts that were actually virtually disabled
# - we need to really disable them, flag the fact and return immediately
# - if you change this, you must alter break.S also
#
###############################################################################
.balign L1_CACHE_BYTES
.globl __entry_kernel_external_interrupt_virtually_disabled
.type __entry_kernel_external_interrupt_virtually_disabled,@function
__entry_kernel_external_interrupt_virtually_disabled:
movsg psr,gr30
andi gr30,#~PSR_PIL,gr30
ori gr30,#PSR_PIL_14,gr30 ; debugging interrupts only
movgs gr30,psr
subcc gr0,gr0,gr0,icc2 ; leave Z set, clear C
rett #0
.size __entry_kernel_external_interrupt_virtually_disabled,.-__entry_kernel_external_interrupt_virtually_disabled
###############################################################################
#
# deal with re-enablement of interrupts that were pending when virtually re-enabled
# - set ICC2.C, re-enable the real interrupts and return
# - we can clear ICC2.Z because we shouldn't be here if it's not 0 [due to TIHI]
# - if you change this, you must alter break.S also
#
###############################################################################
.balign L1_CACHE_BYTES
.globl __entry_kernel_external_interrupt_virtual_reenable
.type __entry_kernel_external_interrupt_virtual_reenable,@function
__entry_kernel_external_interrupt_virtual_reenable:
movsg psr,gr30
andi gr30,#~PSR_PIL,gr30 ; re-enable interrupts
movgs gr30,psr
subicc gr0,#1,gr0,icc2 ; clear Z, set C
rett #0
.size __entry_kernel_external_interrupt_virtual_reenable,.-__entry_kernel_external_interrupt_virtual_reenable
###############################################################################
#
@ -335,6 +382,7 @@ __entry_uspace_softprog_interrupt_reentry:
sethi.p %hi(__entry_return_from_user_exception),gr23
setlo %lo(__entry_return_from_user_exception),gr23
bra __entry_common
.size __entry_uspace_softprog_interrupt,.-__entry_uspace_softprog_interrupt
@ -495,7 +543,10 @@ __entry_common:
movsg gner0,gr4
movsg gner1,gr5
stdi gr4,@(gr28,#REG_GNER0)
stdi.p gr4,@(gr28,#REG_GNER0)
# set up virtual interrupt disablement
subicc gr0,#1,gr0,icc2 /* clear Z flag, set C flag */
# set up kernel global registers
sethi.p %hi(__kernel_current_task),gr5
@ -1418,11 +1469,27 @@ sys_call_table:
.long sys_add_key
.long sys_request_key
.long sys_keyctl
.long sys_ni_syscall // sys_vperfctr_open
.long sys_ni_syscall // sys_vperfctr_control /* 290 */
.long sys_ni_syscall // sys_vperfctr_unlink
.long sys_ni_syscall // sys_vperfctr_iresume
.long sys_ni_syscall // sys_vperfctr_read
.long sys_ioprio_set
.long sys_ioprio_get /* 290 */
.long sys_inotify_init
.long sys_inotify_add_watch
.long sys_inotify_rm_watch
.long sys_migrate_pages
.long sys_openat /* 295 */
.long sys_mkdirat
.long sys_mknodat
.long sys_fchownat
.long sys_futimesat
.long sys_newfstatat /* 300 */
.long sys_unlinkat
.long sys_renameat
.long sys_linkat
.long sys_symlinkat
.long sys_readlinkat /* 305 */
.long sys_fchmodat
.long sys_faccessat
.long sys_pselect6
.long sys_ppoll
syscall_table_size = (. - sys_call_table)

View File

@ -513,6 +513,9 @@ __head_mmu_enabled:
movgs gr0,ccr
movgs gr0,cccr
# initialise the virtual interrupt handling
subcc gr0,gr0,gr0,icc2 /* set Z, clear C */
#ifdef CONFIG_MMU
movgs gr3,scr2
movgs gr3,scr3

View File

@ -287,18 +287,11 @@ asmlinkage void do_IRQ(void)
struct irq_source *source;
int level, cpu;
irq_enter();
level = (__frame->tbr >> 4) & 0xf;
cpu = smp_processor_id();
#if 0
{
static u32 irqcount;
*(volatile u32 *) 0xe1200004 = ~((irqcount++ << 8) | level);
*(volatile u16 *) 0xffc00100 = (u16) ~0x9999;
mb();
}
#endif
if ((unsigned long) __frame - (unsigned long) (current + 1) < 512)
BUG();
@ -308,40 +301,12 @@ asmlinkage void do_IRQ(void)
kstat_this_cpu.irqs[level]++;
irq_enter();
for (source = frv_irq_levels[level].sources; source; source = source->next)
source->doirq(source);
irq_exit();
__clr_MASK(level);
/* only process softirqs if we didn't interrupt another interrupt handler */
if ((__frame->psr & PSR_PIL) == PSR_PIL_0)
if (local_softirq_pending())
do_softirq();
#ifdef CONFIG_PREEMPT
local_irq_disable();
while (--current->preempt_count == 0) {
if (!(__frame->psr & PSR_S) ||
current->need_resched == 0 ||
in_interrupt())
break;
current->preempt_count++;
local_irq_enable();
preempt_schedule();
local_irq_disable();
}
#endif
#if 0
{
*(volatile u16 *) 0xffc00100 = (u16) ~0x6666;
mb();
}
#endif
irq_exit();
} /* end do_IRQ() */

View File

@ -43,15 +43,6 @@ void iounmap(void *addr)
{
}
/*
* __iounmap unmaps nearly everything, so be careful
* it doesn't free currently pointer/page tables anymore but it
* wans't used anyway and might be added later.
*/
void __iounmap(void *addr, unsigned long size)
{
}
/*
* Set new cache mode for some kernel address space.
* The caller must push data for that range itself, if such data may already

View File

@ -33,6 +33,10 @@ config GENERIC_CALIBRATE_DELAY
bool
default y
config TIME_LOW_RES
bool
default y
config ISA
bool
default y

View File

@ -169,7 +169,7 @@ endif
config CPU_H8300H
bool
depends on (H8002 || H83007 || H83048 || H83068)
depends on (H83002 || H83007 || H83048 || H83068)
default y
config CPU_H8S

View File

@ -34,7 +34,7 @@ config GDB_DEBUG
help
gdb stub exception support
config CONFIG_SH_STANDARD_BIOS
config SH_STANDARD_BIOS
bool "Use gdb protocol serial console"
depends on (!H8300H_SIM && !H8S_SIM)
help

View File

@ -328,7 +328,7 @@ CONFIG_FULLDEBUG=y
CONFIG_NO_KERNEL_MSG=y
# CONFIG_SYSCALL_PRINT is not set
# CONFIG_GDB_DEBUG is not set
# CONFIG_CONFIG_SH_STANDARD_BIOS is not set
# CONFIG_SH_STANDARD_BIOS is not set
# CONFIG_DEFAULT_CMDLINE is not set
# CONFIG_BLKDEV_RESERVE is not set

arch/i386/boot/.gitignore vendored 100644
View File

@ -0,0 +1,3 @@
bootsect
bzImage
setup

View File

@ -0,0 +1 @@
build

arch/i386/kernel/.gitignore vendored 100644
View File

@ -0,0 +1 @@
vsyscall.lds

View File

@ -1,4 +1,5 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/init.h>
#include <asm/processor.h>
#include <asm/msr.h>

View File

@ -398,7 +398,11 @@ ignore_int:
pushl 32(%esp)
pushl 40(%esp)
pushl $int_msg
#ifdef CONFIG_EARLY_PRINTK
call early_printk
#else
call printk
#endif
addl $(5*4),%esp
popl %ds
popl %es

View File

@ -710,7 +710,7 @@ void __init get_smp_config (void)
* Read the physical hardware table. Anything here will
* override the defaults.
*/
if (!smp_read_mpc((void *)mpf->mpf_physptr)) {
if (!smp_read_mpc(phys_to_virt(mpf->mpf_physptr))) {
smp_found_config = 0;
printk(KERN_ERR "BIOS bug, MP table errors detected!...\n");
printk(KERN_ERR "... disabling SMP support. (tell your hw vendor)\n");

View File

@ -87,11 +87,7 @@ EXPORT_SYMBOL(cpu_online_map);
cpumask_t cpu_callin_map;
cpumask_t cpu_callout_map;
EXPORT_SYMBOL(cpu_callout_map);
#ifdef CONFIG_HOTPLUG_CPU
cpumask_t cpu_possible_map = CPU_MASK_ALL;
#else
cpumask_t cpu_possible_map;
#endif
EXPORT_SYMBOL(cpu_possible_map);
static cpumask_t smp_commenced_mask;

View File

@ -299,7 +299,7 @@ ENTRY(sys_call_table)
.long sys_mknodat
.long sys_fchownat
.long sys_futimesat
.long sys_newfstatat /* 300 */
.long sys_fstatat64 /* 300 */
.long sys_unlinkat
.long sys_renameat
.long sys_linkat

View File

@ -282,6 +282,10 @@ time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
if (val != CPUFREQ_RESUMECHANGE)
write_seqlock_irq(&xtime_lock);
if (!ref_freq) {
if (!freq->old){
ref_freq = freq->new;
goto end;
}
ref_freq = freq->old;
loops_per_jiffy_ref = cpu_data[freq->cpu].loops_per_jiffy;
#ifndef CONFIG_SMP
@ -307,6 +311,7 @@ time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
#endif
}
end:
if (val != CPUFREQ_RESUMECHANGE)
write_sequnlock_irq(&xtime_lock);

View File

@ -7,6 +7,21 @@
* for details.
*/
/*
* The caller puts arg2 in %ecx, which gets pushed. The kernel will use
* %ecx itself for arg2. The pushing is because the sysexit instruction
* (found in entry.S) requires that we clobber %ecx with the desired %esp.
* User code might expect that %ecx is unclobbered though, as it would be
* for returning via the iret instruction, so we must push and pop.
*
* The caller puts arg3 in %edx, which the sysexit instruction requires
* for %eip. Thus, exactly as for arg2, we must push and pop.
*
* Arg6 is different. The caller puts arg6 in %ebp. Since the sysenter
* instruction clobbers %esp, the user's %esp won't even survive entry
* into the kernel. We store %esp in %ebp. Code in entry.S must fetch
* arg6 from the stack.
*/
.text
.globl __kernel_vsyscall
.type __kernel_vsyscall,@function

View File

@ -240,7 +240,7 @@ static cpumask_t smp_commenced_mask = CPU_MASK_NONE;
cpumask_t cpu_callin_map = CPU_MASK_NONE;
cpumask_t cpu_callout_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_callout_map);
cpumask_t cpu_possible_map = CPU_MASK_ALL;
cpumask_t cpu_possible_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_possible_map);
/* The per processor IRQ masks (these are usually kept in sync) */

View File

@ -20,7 +20,20 @@ struct frame_head {
} __attribute__((packed));
static struct frame_head *
dump_backtrace(struct frame_head * head)
dump_kernel_backtrace(struct frame_head * head)
{
oprofile_add_trace(head->ret);
/* frame pointers should strictly progress back up the stack
* (towards higher addresses) */
if (head >= head->ebp)
return NULL;
return head->ebp;
}
static struct frame_head *
dump_user_backtrace(struct frame_head * head)
{
struct frame_head bufhead[2];
@ -105,10 +118,10 @@ x86_backtrace(struct pt_regs * const regs, unsigned int depth)
if (!user_mode_vm(regs)) {
while (depth-- && valid_kernel_stack(head, regs))
head = dump_backtrace(head);
head = dump_kernel_backtrace(head);
return;
}
while (depth-- && head)
head = dump_backtrace(head);
head = dump_user_backtrace(head);
}

View File

@ -761,6 +761,59 @@ int acpi_map_cpu2node(acpi_handle handle, int cpu, long physid)
return (0);
}
int additional_cpus __initdata = -1;
static __init int setup_additional_cpus(char *s)
{
if (s)
additional_cpus = simple_strtol(s, NULL, 0);
return 0;
}
early_param("additional_cpus", setup_additional_cpus);
/*
* cpu_possible_map should be static; it cannot change as CPUs
* are onlined or offlined. The reason is that per-cpu data structures
* are allocated by some modules at init time, and they don't expect to
* do this dynamically on cpu arrival/departure.
* cpu_present_map, on the other hand, can change dynamically.
* In case cpu_hotplug is not compiled in, we resort to the current
* behaviour, which is cpu_possible == cpu_present.
* - Ashok Raj
*
* Three ways to find out the number of additional hotplug CPUs:
* - If the BIOS specified disabled CPUs in ACPI/mptables, use that.
* - The user can override it with additional_cpus=NUM
* - Otherwise don't reserve additional CPUs.
*/
__init void prefill_possible_map(void)
{
int i;
int possible, disabled_cpus;
disabled_cpus = total_cpus - available_cpus;
if (additional_cpus == -1) {
if (disabled_cpus > 0)
additional_cpus = disabled_cpus;
else
additional_cpus = 0;
}
possible = available_cpus + additional_cpus;
if (possible > NR_CPUS)
possible = NR_CPUS;
printk(KERN_INFO "SMP: Allowing %d CPUs, %d hotplug CPUs\n",
possible, max((possible - available_cpus), 0));
for (i = 0; i < possible; i++)
cpu_set(i, cpu_possible_map);
}
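The constraint described in the comment above is worth a concrete illustration: per-cpu consumers typically size their data once against cpu_possible_map at init time, so the map must not grow afterwards. A minimal, hypothetical sketch (module and buffer names are invented; the iteration macro and allocator follow the kernel APIs of this era, so treat it as an assumption-laden example rather than code from this patch):

#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <asm/page.h>

static void *example_percpu_buf[NR_CPUS];	/* sized once, at init time */

static int __init example_init(void)
{
	int cpu;

	/*
	 * Allocate for every CPU that could ever come online.  If
	 * cpu_possible_map were allowed to grow later, a CPU added at
	 * runtime would find no buffer here - which is why the map is
	 * filled once by prefill_possible_map() and then kept constant.
	 */
	for_each_cpu_mask(cpu, cpu_possible_map)
		example_percpu_buf[cpu] = kzalloc(PAGE_SIZE, GFP_KERNEL);
	return 0;
}
module_init(example_init);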
int acpi_map_lsapic(acpi_handle handle, int *pcpu)
{
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };

View File

@ -569,7 +569,9 @@ GLOBAL_ENTRY(ia64_trace_syscall)
.mem.offset 0,0; st8.spill [r2]=r8 // store return value in slot for r8
.mem.offset 8,0; st8.spill [r3]=r10 // clear error indication in slot for r10
br.call.sptk.many rp=syscall_trace_leave // give parent a chance to catch return value
.ret3: br.cond.sptk .work_pending_syscall_end
.ret3:
(pUStk) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUStk
br.cond.sptk .work_pending_syscall_end
strace_error:
ld8 r3=[r2] // load pt_regs.r8

View File

@ -10,23 +10,8 @@
#include <linux/string.h>
EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memchr);
EXPORT_SYMBOL(memcmp);
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(memscan);
EXPORT_SYMBOL(strcat);
EXPORT_SYMBOL(strchr);
EXPORT_SYMBOL(strcmp);
EXPORT_SYMBOL(strcpy);
EXPORT_SYMBOL(strlen);
EXPORT_SYMBOL(strncat);
EXPORT_SYMBOL(strncmp);
EXPORT_SYMBOL(strncpy);
EXPORT_SYMBOL(strnlen);
EXPORT_SYMBOL(strrchr);
EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strpbrk);
#include <asm/checksum.h>
EXPORT_SYMBOL(ip_fast_csum); /* hand-coded assembly */

View File

@ -430,6 +430,7 @@ setup_arch (char **cmdline_p)
if (early_console_setup(*cmdline_p) == 0)
mark_bsp_online();
parse_early_param();
#ifdef CONFIG_ACPI
/* Initialize the ACPI boot-time table parser */
acpi_table_init();
@ -688,6 +689,9 @@ void
setup_per_cpu_areas (void)
{
/* start_kernel() requires this... */
#ifdef CONFIG_ACPI_HOTPLUG_CPU
prefill_possible_map();
#endif
}
/*

View File

@ -129,7 +129,7 @@ DEFINE_PER_CPU(int, cpu_state);
/* Bitmasks of currently online, and possible CPUs */
cpumask_t cpu_online_map;
EXPORT_SYMBOL(cpu_online_map);
cpumask_t cpu_possible_map;
cpumask_t cpu_possible_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_possible_map);
cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
@ -506,9 +506,6 @@ smp_build_cpu_map (void)
for (cpu = 0; cpu < NR_CPUS; cpu++) {
ia64_cpu_to_sapicid[cpu] = -1;
#ifdef CONFIG_HOTPLUG_CPU
cpu_set(cpu, cpu_possible_map);
#endif
}
ia64_cpu_to_sapicid[0] = boot_cpu_id;

View File

@ -250,32 +250,27 @@ time_init (void)
set_normalized_timespec(&wall_to_monotonic, -xtime.tv_sec, -xtime.tv_nsec);
}
#define SMALLUSECS 100
/*
* Generic udelay assumes that if preemption is allowed and the thread
* migrates to another CPU, the ITC values are synchronized across
* all CPUs.
*/
static void
ia64_itc_udelay (unsigned long usecs)
{
unsigned long start = ia64_get_itc();
unsigned long end = start + usecs*local_cpu_data->cyc_per_usec;
while (time_before(ia64_get_itc(), end))
cpu_relax();
}
void (*ia64_udelay)(unsigned long usecs) = &ia64_itc_udelay;
void
udelay (unsigned long usecs)
{
unsigned long start;
unsigned long cycles;
unsigned long smallusecs;
/*
* Execute the non-preemptible delay loop (because the ITC might
* not be synchronized between CPUS) in relatively short time
* chunks, allowing preemption between the chunks.
*/
while (usecs > 0) {
smallusecs = (usecs > SMALLUSECS) ? SMALLUSECS : usecs;
preempt_disable();
cycles = smallusecs*local_cpu_data->cyc_per_usec;
start = ia64_get_itc();
while (ia64_get_itc() - start < cycles)
cpu_relax();
preempt_enable();
usecs -= smallusecs;
}
(*ia64_udelay)(usecs);
}
EXPORT_SYMBOL(udelay);
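The ia64_udelay function pointer above exists so that a platform whose ITC is not synchronized across CPUs can install its own delay primitive at init time; the sn2 timer code later in this diff does exactly that with its RTC. A hypothetical sketch of such an override (the platform counter helpers are invented for illustration and are not part of this patch):

#include <linux/init.h>
#include <linux/jiffies.h>	/* time_before() */
#include <asm/processor.h>	/* cpu_relax() */

/* Declared by the ia64 time code, as introduced in the hunk above. */
extern void (*ia64_udelay)(unsigned long usecs);

/* Hypothetical: a counter that is synchronized across all CPUs. */
extern unsigned long example_sync_counter(void);
extern unsigned long example_counter_cyc_per_usec(void);

static void example_udelay(unsigned long usecs)
{
	unsigned long end = example_sync_counter() +
			    usecs * example_counter_cyc_per_usec();

	/* Safe even if the thread migrates: the counter is global. */
	while (time_before(example_sync_counter(), end))
		cpu_relax();
}

/* Called from the platform's timer setup to install the override. */
static int __init example_timer_init(void)
{
	ia64_udelay = &example_udelay;
	return 0;
}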

View File

@ -16,6 +16,7 @@
#include <linux/module.h> /* for EXPORT_SYMBOL */
#include <linux/hardirq.h>
#include <linux/kprobes.h>
#include <linux/delay.h> /* for ssleep() */
#include <asm/fpswa.h>
#include <asm/ia32.h>
@ -116,6 +117,13 @@ die (const char *str, struct pt_regs *regs, long err)
bust_spinlocks(0);
die.lock_owner = -1;
spin_unlock_irq(&die.lock);
if (panic_on_oops) {
printk(KERN_EMERG "Fatal exception: panic in 5 seconds\n");
ssleep(5);
panic("Fatal exception");
}
do_exit(SIGSEGV);
}

View File

@ -23,6 +23,10 @@
#include "xtalk/hubdev.h"
#include "xtalk/xwidgetdev.h"
extern void sn_init_cpei_timer(void);
extern void register_sn_procfs(void);
static struct list_head sn_sysdata_list;
/* sysdata list struct */
@ -40,12 +44,12 @@ struct brick {
struct slab_info slab_info[MAX_SLABS + 1];
};
int sn_ioif_inited = 0; /* SN I/O infrastructure initialized? */
int sn_ioif_inited; /* SN I/O infrastructure initialized? */
struct sn_pcibus_provider *sn_pci_provider[PCIIO_ASIC_MAX_TYPES]; /* indexed by asic type */
static int max_segment_number = 0; /* Default highest segment number */
static int max_pcibus_number = 255; /* Default highest pci bus number */
static int max_segment_number; /* Default highest segment number */
static int max_pcibus_number = 255; /* Default highest pci bus number */
/*
* Hooks and struct for unsupported pci providers
@ -84,7 +88,6 @@ static inline u64
sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num,
u64 address)
{
struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
@ -94,7 +97,6 @@ sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num,
(u64) nasid, (u64) widget_num,
(u64) device_num, (u64) address, 0, 0, 0);
return ret_stuff.status;
}
/*
@ -102,7 +104,6 @@ sal_get_device_dmaflush_list(u64 nasid, u64 widget_num, u64 device_num,
*/
static inline u64 sal_get_hubdev_info(u64 handle, u64 address)
{
struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
@ -118,7 +119,6 @@ static inline u64 sal_get_hubdev_info(u64 handle, u64 address)
*/
static inline u64 sal_get_pcibus_info(u64 segment, u64 busnum, u64 address)
{
struct ia64_sal_retval ret_stuff;
ret_stuff.status = 0;
ret_stuff.v0 = 0;
@ -215,7 +215,7 @@ static void __init sn_fixup_ionodes(void)
struct hubdev_info *hubdev;
u64 status;
u64 nasid;
int i, widget, device;
int i, widget, device, size;
/*
* Get SGI Specific HUB chipset information.
@ -251,48 +251,37 @@ static void __init sn_fixup_ionodes(void)
if (!hubdev->hdi_flush_nasid_list.widget_p)
continue;
size = (HUB_WIDGET_ID_MAX + 1) *
sizeof(struct sn_flush_device_kernel *);
hubdev->hdi_flush_nasid_list.widget_p =
kmalloc((HUB_WIDGET_ID_MAX + 1) *
sizeof(struct sn_flush_device_kernel *),
GFP_KERNEL);
memset(hubdev->hdi_flush_nasid_list.widget_p, 0x0,
(HUB_WIDGET_ID_MAX + 1) *
sizeof(struct sn_flush_device_kernel *));
kzalloc(size, GFP_KERNEL);
if (!hubdev->hdi_flush_nasid_list.widget_p)
BUG();
for (widget = 0; widget <= HUB_WIDGET_ID_MAX; widget++) {
sn_flush_device_kernel = kmalloc(DEV_PER_WIDGET *
sizeof(struct
sn_flush_device_kernel),
GFP_KERNEL);
size = DEV_PER_WIDGET *
sizeof(struct sn_flush_device_kernel);
sn_flush_device_kernel = kzalloc(size, GFP_KERNEL);
if (!sn_flush_device_kernel)
BUG();
memset(sn_flush_device_kernel, 0x0,
DEV_PER_WIDGET *
sizeof(struct sn_flush_device_kernel));
dev_entry = sn_flush_device_kernel;
for (device = 0; device < DEV_PER_WIDGET;
device++,dev_entry++) {
dev_entry->common = kmalloc(sizeof(struct
sn_flush_device_common),
GFP_KERNEL);
size = sizeof(struct sn_flush_device_common);
dev_entry->common = kzalloc(size, GFP_KERNEL);
if (!dev_entry->common)
BUG();
memset(dev_entry->common, 0x0, sizeof(struct
sn_flush_device_common));
if (sn_prom_feature_available(
PRF_DEVICE_FLUSH_LIST))
status = sal_get_device_dmaflush_list(
nasid,
widget,
device,
(u64)(dev_entry->common));
nasid, widget, device,
(u64)(dev_entry->common));
else
status = sn_device_fixup_war(nasid,
widget,
device,
dev_entry->common);
widget, device,
dev_entry->common);
if (status != SALRET_OK)
panic("SAL call failed: %s\n",
ia64_sal_strerror(status));
@ -383,13 +372,12 @@ void sn_pci_fixup_slot(struct pci_dev *dev)
pci_dev_get(dev); /* for the sysdata pointer */
pcidev_info = kzalloc(sizeof(struct pcidev_info), GFP_KERNEL);
if (pcidev_info <= 0)
if (!pcidev_info)
BUG(); /* Cannot afford to run out of memory */
sn_irq_info = kmalloc(sizeof(struct sn_irq_info), GFP_KERNEL);
if (sn_irq_info <= 0)
sn_irq_info = kzalloc(sizeof(struct sn_irq_info), GFP_KERNEL);
if (!sn_irq_info)
BUG(); /* Cannot afford to run out of memory */
memset(sn_irq_info, 0, sizeof(struct sn_irq_info));
/* Call to retrieve pci device information needed by kernel. */
status = sal_get_pcidev_info((u64) segment, (u64) dev->bus->number,
@ -482,13 +470,13 @@ void sn_pci_fixup_slot(struct pci_dev *dev)
*/
void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus)
{
int status = 0;
int status;
int nasid, cnode;
struct pci_controller *controller;
struct sn_pci_controller *sn_controller;
struct pcibus_bussoft *prom_bussoft_ptr;
struct hubdev_info *hubdev_info;
void *provider_soft = NULL;
void *provider_soft;
struct sn_pcibus_provider *provider;
status = sal_get_pcibus_info((u64) segment, (u64) busnum,
@ -535,6 +523,8 @@ void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus)
bus->sysdata = controller;
if (provider->bus_fixup)
provider_soft = (*provider->bus_fixup) (prom_bussoft_ptr, controller);
else
provider_soft = NULL;
if (provider_soft == NULL) {
/* fixup failed or not applicable */
@ -638,13 +628,8 @@ void sn_bus_free_sysdata(void)
static int __init sn_pci_init(void)
{
int i = 0;
int j = 0;
int i, j;
struct pci_dev *pci_dev = NULL;
extern void sn_init_cpei_timer(void);
#ifdef CONFIG_PROC_FS
extern void register_sn_procfs(void);
#endif
if (!ia64_platform_is("sn2") || IS_RUNNING_ON_FAKE_PROM())
return 0;
@ -700,32 +685,29 @@ static int __init sn_pci_init(void)
*/
void hubdev_init_node(nodepda_t * npda, cnodeid_t node)
{
struct hubdev_info *hubdev_info;
int size;
pg_data_t *pg;
size = sizeof(struct hubdev_info);
if (node >= num_online_nodes()) /* Headless/memless IO nodes */
hubdev_info =
(struct hubdev_info *)alloc_bootmem_node(NODE_DATA(0),
sizeof(struct
hubdev_info));
pg = NODE_DATA(0);
else
hubdev_info =
(struct hubdev_info *)alloc_bootmem_node(NODE_DATA(node),
sizeof(struct
hubdev_info));
npda->pdinfo = (void *)hubdev_info;
pg = NODE_DATA(node);
hubdev_info = (struct hubdev_info *)alloc_bootmem_node(pg, size);
npda->pdinfo = (void *)hubdev_info;
}
geoid_t
cnodeid_get_geoid(cnodeid_t cnode)
{
struct hubdev_info *hubdev;
hubdev = (struct hubdev_info *)(NODEPDA(cnode)->pdinfo);
return hubdev->hdi_geoid;
}
subsys_initcall(sn_pci_init);
@ -734,3 +716,4 @@ EXPORT_SYMBOL(sn_pci_unfixup_slot);
EXPORT_SYMBOL(sn_pci_controller_fixup);
EXPORT_SYMBOL(sn_bus_store_sysdata);
EXPORT_SYMBOL(sn_bus_free_sysdata);
EXPORT_SYMBOL(sn_pcidev_info_get);

View File

@ -75,7 +75,7 @@ EXPORT_SYMBOL(sn_rtc_cycles_per_second);
DEFINE_PER_CPU(struct sn_hub_info_s, __sn_hub_info);
EXPORT_PER_CPU_SYMBOL(__sn_hub_info);
DEFINE_PER_CPU(short, __sn_cnodeid_to_nasid[MAX_NUMNODES]);
DEFINE_PER_CPU(short, __sn_cnodeid_to_nasid[MAX_COMPACT_NODES]);
EXPORT_PER_CPU_SYMBOL(__sn_cnodeid_to_nasid);
DEFINE_PER_CPU(struct nodepda_s *, __sn_nodepda);
@ -317,6 +317,7 @@ struct pcdp_vga_device {
#define PCDP_PCI_TRANS_IOPORT 0x02
#define PCDP_PCI_TRANS_MMIO 0x01
#if defined(CONFIG_VT) && defined(CONFIG_VGA_CONSOLE)
static void
sn_scan_pcdp(void)
{
@ -358,6 +359,7 @@ sn_scan_pcdp(void)
break; /* once we find the primary, we're done */
}
}
#endif
static unsigned long sn2_rtc_initial;

View File

@ -3,7 +3,7 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 1999,2001-2004 Silicon Graphics, Inc. All Rights Reserved.
* Copyright (C) 1999,2001-2004, 2006 Silicon Graphics, Inc. All Rights Reserved.
*
* Module to export the system's Firmware Interface Tables, including
* PROM revision numbers and banners, in /proc
@ -190,7 +190,7 @@ static int
read_version_entry(char *page, char **start, off_t off, int count, int *eof,
void *data)
{
int len = 0;
int len;
/* data holds the NASID of the node */
len = dump_version(page, (unsigned long)data);
@ -202,7 +202,7 @@ static int
read_fit_entry(char *page, char **start, off_t off, int count, int *eof,
void *data)
{
int len = 0;
int len;
/* data holds the NASID of the node */
len = dump_fit(page, (unsigned long)data);
@ -229,13 +229,16 @@ int __init prominfo_init(void)
struct proc_dir_entry *p;
cnodeid_t cnodeid;
unsigned long nasid;
int size;
char name[NODE_NAME_LEN];
if (!ia64_platform_is("sn2"))
return 0;
proc_entries = kmalloc(num_online_nodes() * sizeof(struct proc_dir_entry *),
GFP_KERNEL);
size = num_online_nodes() * sizeof(struct proc_dir_entry *);
proc_entries = kzalloc(size, GFP_KERNEL);
if (!proc_entries)
return -ENOMEM;
sgi_prominfo_entry = proc_mkdir("sgi_prominfo", NULL);
@ -244,14 +247,12 @@ int __init prominfo_init(void)
sprintf(name, "node%d", cnodeid);
*entp = proc_mkdir(name, sgi_prominfo_entry);
nasid = cnodeid_to_nasid(cnodeid);
p = create_proc_read_entry(
"fit", 0, *entp, read_fit_entry,
(void *)nasid);
p = create_proc_read_entry("fit", 0, *entp, read_fit_entry,
(void *)nasid);
if (p)
p->owner = THIS_MODULE;
p = create_proc_read_entry(
"version", 0, *entp, read_version_entry,
(void *)nasid);
p = create_proc_read_entry("version", 0, *entp,
read_version_entry, (void *)nasid);
if (p)
p->owner = THIS_MODULE;
entp++;
@ -263,7 +264,7 @@ int __init prominfo_init(void)
void __exit prominfo_exit(void)
{
struct proc_dir_entry **entp;
unsigned cnodeid;
unsigned int cnodeid;
char name[NODE_NAME_LEN];
entp = proc_entries;

View File

@ -46,8 +46,14 @@ DECLARE_PER_CPU(struct ptc_stats, ptcstats);
static __cacheline_aligned DEFINE_SPINLOCK(sn2_global_ptc_lock);
void sn2_ptc_deadlock_recovery(short *, short, short, int, volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
extern unsigned long
sn2_ptc_deadlock_recovery_core(volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
void
sn2_ptc_deadlock_recovery(short *, short, short, int,
volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
/*
* Note: some of the following is captured here to make debugging easier
@ -59,16 +65,6 @@ void sn2_ptc_deadlock_recovery(short *, short, short, int, volatile unsigned lon
#define reset_max_active_on_deadlock() 1
#define PTC_LOCK(sh1) ((sh1) ? &sn2_global_ptc_lock : &sn_nodepda->ptc_lock)
static inline void ptc_lock(int sh1, unsigned long *flagp)
{
spin_lock_irqsave(PTC_LOCK(sh1), *flagp);
}
static inline void ptc_unlock(int sh1, unsigned long flags)
{
spin_unlock_irqrestore(PTC_LOCK(sh1), flags);
}
struct ptc_stats {
unsigned long ptc_l;
unsigned long change_rid;
@ -82,6 +78,8 @@ struct ptc_stats {
unsigned long shub_ptc_flushes_not_my_mm;
};
#define sn2_ptctest 0
static inline unsigned long wait_piowc(void)
{
volatile unsigned long *piows;
@ -200,7 +198,7 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
max_active = max_active_pio(shub1);
itc = ia64_get_itc();
ptc_lock(shub1, &flags);
spin_lock_irqsave(PTC_LOCK(shub1), flags);
itc2 = ia64_get_itc();
__get_cpu_var(ptcstats).lock_itc_clocks += itc2 - itc;
@ -258,7 +256,7 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
ia64_srlz_d();
}
ptc_unlock(shub1, flags);
spin_unlock_irqrestore(PTC_LOCK(shub1), flags);
preempt_enable();
}
@ -270,11 +268,12 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
* TLB flush transaction. The recovery sequence is somewhat tricky & is
* coded in assembly language.
*/
void sn2_ptc_deadlock_recovery(short *nasids, short ib, short ie, int mynasid, volatile unsigned long *ptc0, unsigned long data0,
volatile unsigned long *ptc1, unsigned long data1)
void
sn2_ptc_deadlock_recovery(short *nasids, short ib, short ie, int mynasid,
volatile unsigned long *ptc0, unsigned long data0,
volatile unsigned long *ptc1, unsigned long data1)
{
extern unsigned long sn2_ptc_deadlock_recovery_core(volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long, volatile unsigned long *, unsigned long);
short nasid, i;
unsigned long *piows, zeroval, n;

View File

@ -6,11 +6,11 @@
* Copyright (C) 2000-2005 Silicon Graphics, Inc. All rights reserved.
*/
#include <linux/config.h>
#include <asm/uaccess.h>
#ifdef CONFIG_PROC_FS
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <asm/uaccess.h>
#include <asm/sn/sn_sal.h>
static int partition_id_show(struct seq_file *s, void *p)
@ -90,10 +90,10 @@ static int coherence_id_open(struct inode *inode, struct file *file)
return single_open(file, coherence_id_show, NULL);
}
static struct proc_dir_entry *sn_procfs_create_entry(
const char *name, struct proc_dir_entry *parent,
int (*openfunc)(struct inode *, struct file *),
int (*releasefunc)(struct inode *, struct file *))
static struct proc_dir_entry
*sn_procfs_create_entry(const char *name, struct proc_dir_entry *parent,
int (*openfunc)(struct inode *, struct file *),
int (*releasefunc)(struct inode *, struct file *))
{
struct proc_dir_entry *e = create_proc_entry(name, 0444, parent);
@ -126,24 +126,24 @@ void register_sn_procfs(void)
return;
sn_procfs_create_entry("partition_id", sgi_proc_dir,
partition_id_open, single_release);
partition_id_open, single_release);
sn_procfs_create_entry("system_serial_number", sgi_proc_dir,
system_serial_number_open, single_release);
system_serial_number_open, single_release);
sn_procfs_create_entry("licenseID", sgi_proc_dir,
licenseID_open, single_release);
licenseID_open, single_release);
e = sn_procfs_create_entry("sn_force_interrupt", sgi_proc_dir,
sn_force_interrupt_open, single_release);
sn_force_interrupt_open, single_release);
if (e)
e->proc_fops->write = sn_force_interrupt_write_proc;
sn_procfs_create_entry("coherence_id", sgi_proc_dir,
coherence_id_open, single_release);
coherence_id_open, single_release);
sn_procfs_create_entry("sn_topology", sgi_proc_dir,
sn_topology_open, sn_topology_release);
sn_topology_open, sn_topology_release);
}
#endif /* CONFIG_PROC_FS */

View File

@ -14,6 +14,7 @@
#include <asm/hw_irq.h>
#include <asm/system.h>
#include <asm/timex.h>
#include <asm/sn/leds.h>
#include <asm/sn/shub_mmr.h>
@ -28,9 +29,27 @@ static struct time_interpolator sn2_interpolator = {
.source = TIME_SOURCE_MMIO64
};
/*
* sn udelay uses the RTC instead of the ITC because the ITC is not
* synchronized across all CPUs, and the thread may migrate to another CPU
* if preemption is enabled.
*/
static void
ia64_sn_udelay (unsigned long usecs)
{
unsigned long start = rtc_time();
unsigned long end = start +
usecs * sn_rtc_cycles_per_second / 1000000;
while (time_before((unsigned long)rtc_time(), end))
cpu_relax();
}
void __init sn_timer_init(void)
{
sn2_interpolator.frequency = sn_rtc_cycles_per_second;
sn2_interpolator.addr = RTC_COUNTER_ADDR;
register_time_interpolator(&sn2_interpolator);
ia64_udelay = &ia64_sn_udelay;
}

View File

@ -1,7 +1,7 @@
/*
*
*
* Copyright (c) 2005 Silicon Graphics, Inc. All Rights Reserved.
* Copyright (c) 2005, 2006 Silicon Graphics, Inc. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of version 2 of the GNU General Public License
@ -22,11 +22,6 @@
* License along with this program; if not, write the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
*
* Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
* Mountain View, CA 94043, or:
*
* http://www.sgi.com
*
* For further information regarding this notice, see:
*
* http://oss.sgi.com/projects/GenInfo/NoticeExplan

View File

@ -284,12 +284,10 @@ struct sn_irq_info *tiocx_irq_alloc(nasid_t nasid, int widget, int irq,
if ((nasid & 1) == 0)
return NULL;
sn_irq_info = kmalloc(sn_irq_size, GFP_KERNEL);
sn_irq_info = kzalloc(sn_irq_size, GFP_KERNEL);
if (sn_irq_info == NULL)
return NULL;
memset(sn_irq_info, 0x0, sn_irq_size);
status = tiocx_intr_alloc(nasid, widget, __pa(sn_irq_info), irq,
req_nasid, slice);
if (status) {

View File

@ -738,7 +738,9 @@ xpc_process_disconnect(struct xpc_channel *ch, unsigned long *irq_flags)
/* make sure all activity has settled down first */
if (atomic_read(&ch->references) > 0) {
if (atomic_read(&ch->references) > 0 ||
((ch->flags & XPC_C_CONNECTEDCALLOUT_MADE) &&
!(ch->flags & XPC_C_DISCONNECTINGCALLOUT_MADE))) {
return;
}
DBUG_ON(atomic_read(&ch->kthreads_assigned) != 0);
@ -775,7 +777,7 @@ xpc_process_disconnect(struct xpc_channel *ch, unsigned long *irq_flags)
/* both sides are disconnected now */
if (ch->flags & XPC_C_CONNECTCALLOUT) {
if (ch->flags & XPC_C_DISCONNECTINGCALLOUT_MADE) {
spin_unlock_irqrestore(&ch->lock, *irq_flags);
xpc_disconnect_callout(ch, xpcDisconnected);
spin_lock_irqsave(&ch->lock, *irq_flags);
@ -1300,7 +1302,7 @@ xpc_process_msg_IPI(struct xpc_partition *part, int ch_number)
"delivered=%d, partid=%d, channel=%d\n",
nmsgs_sent, ch->partid, ch->number);
if (ch->flags & XPC_C_CONNECTCALLOUT) {
if (ch->flags & XPC_C_CONNECTEDCALLOUT_MADE) {
xpc_activate_kthreads(ch, nmsgs_sent);
}
}

View File

@ -750,12 +750,16 @@ xpc_daemonize_kthread(void *args)
/* let registerer know that connection has been established */
spin_lock_irqsave(&ch->lock, irq_flags);
if (!(ch->flags & XPC_C_CONNECTCALLOUT)) {
ch->flags |= XPC_C_CONNECTCALLOUT;
if (!(ch->flags & XPC_C_CONNECTEDCALLOUT)) {
ch->flags |= XPC_C_CONNECTEDCALLOUT;
spin_unlock_irqrestore(&ch->lock, irq_flags);
xpc_connected_callout(ch);
spin_lock_irqsave(&ch->lock, irq_flags);
ch->flags |= XPC_C_CONNECTEDCALLOUT_MADE;
spin_unlock_irqrestore(&ch->lock, irq_flags);
/*
* It is possible that while the callout was being
* made, the remote partition sent some messages.
@ -777,15 +781,17 @@ xpc_daemonize_kthread(void *args)
if (atomic_dec_return(&ch->kthreads_assigned) == 0) {
spin_lock_irqsave(&ch->lock, irq_flags);
if ((ch->flags & XPC_C_CONNECTCALLOUT) &&
!(ch->flags & XPC_C_DISCONNECTCALLOUT)) {
ch->flags |= XPC_C_DISCONNECTCALLOUT;
if ((ch->flags & XPC_C_CONNECTEDCALLOUT_MADE) &&
!(ch->flags & XPC_C_DISCONNECTINGCALLOUT)) {
ch->flags |= XPC_C_DISCONNECTINGCALLOUT;
spin_unlock_irqrestore(&ch->lock, irq_flags);
xpc_disconnect_callout(ch, xpcDisconnecting);
} else {
spin_unlock_irqrestore(&ch->lock, irq_flags);
spin_lock_irqsave(&ch->lock, irq_flags);
ch->flags |= XPC_C_DISCONNECTINGCALLOUT_MADE;
}
spin_unlock_irqrestore(&ch->lock, irq_flags);
if (atomic_dec_return(&part->nchannels_engaged) == 0) {
xpc_mark_partition_disengaged(part);
xpc_IPI_send_disengage(part);

View File

@ -335,10 +335,10 @@ int sn_pci_legacy_read(struct pci_bus *bus, u16 port, u32 *val, u8 size)
*/
SAL_CALL(isrv, SN_SAL_IOIF_PCI_SAFE,
pci_domain_nr(bus), bus->number,
0, /* io */
0, /* read */
port, size, __pa(val));
pci_domain_nr(bus), bus->number,
0, /* io */
0, /* read */
port, size, __pa(val));
if (isrv.status == 0)
return size;
@ -381,10 +381,10 @@ int sn_pci_legacy_write(struct pci_bus *bus, u16 port, u32 val, u8 size)
*/
SAL_CALL(isrv, SN_SAL_IOIF_PCI_SAFE,
pci_domain_nr(bus), bus->number,
0, /* io */
1, /* write */
port, size, __pa(&val));
pci_domain_nr(bus), bus->number,
0, /* io */
1, /* write */
port, size, __pa(&val));
if (isrv.status == 0)
return size;

View File

@ -3,7 +3,7 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2001-2004 Silicon Graphics, Inc. All rights reserved.
* Copyright (C) 2001-2006 Silicon Graphics, Inc. All rights reserved.
*/
#include <linux/types.h>
@ -12,7 +12,7 @@
#include <asm/sn/pcibus_provider_defs.h>
#include <asm/sn/pcidev.h>
int pcibr_invalidate_ate = 0; /* by default don't invalidate ATE on free */
int pcibr_invalidate_ate; /* by default don't invalidate ATE on free */
/*
* mark_ate: Mark the ate as either free or in use.
@ -20,14 +20,12 @@ int pcibr_invalidate_ate = 0; /* by default don't invalidate ATE on free */
static void mark_ate(struct ate_resource *ate_resource, int start, int number,
u64 value)
{
u64 *ate = ate_resource->ate;
int index;
int length = 0;
for (index = start; length < number; index++, length++)
ate[index] = value;
}
/*
@ -37,7 +35,6 @@ static void mark_ate(struct ate_resource *ate_resource, int start, int number,
static int find_free_ate(struct ate_resource *ate_resource, int start,
int count)
{
u64 *ate = ate_resource->ate;
int index;
int start_free;
@ -70,12 +67,10 @@ static int find_free_ate(struct ate_resource *ate_resource, int start,
static inline void free_ate_resource(struct ate_resource *ate_resource,
int start)
{
mark_ate(ate_resource, start, ate_resource->ate[start], 0);
if ((ate_resource->lowest_free_index > start) ||
(ate_resource->lowest_free_index < 0))
ate_resource->lowest_free_index = start;
}
/*
@ -84,7 +79,6 @@ static inline void free_ate_resource(struct ate_resource *ate_resource,
static inline int alloc_ate_resource(struct ate_resource *ate_resource,
int ate_needed)
{
int start_index;
/*
@ -118,19 +112,12 @@ static inline int alloc_ate_resource(struct ate_resource *ate_resource,
*/
int pcibr_ate_alloc(struct pcibus_info *pcibus_info, int count)
{
int status = 0;
u64 flag;
int status;
unsigned long flags;
flag = pcibr_lock(pcibus_info);
spin_lock_irqsave(&pcibus_info->pbi_lock, flags);
status = alloc_ate_resource(&pcibus_info->pbi_int_ate_resource, count);
if (status < 0) {
/* Failed to allocate */
pcibr_unlock(pcibus_info, flag);
return -1;
}
pcibr_unlock(pcibus_info, flag);
spin_unlock_irqrestore(&pcibus_info->pbi_lock, flags);
return status;
}
@ -182,7 +169,7 @@ void pcibr_ate_free(struct pcibus_info *pcibus_info, int index)
ate_write(pcibus_info, index, count, (ate & ~PCI32_ATE_V));
}
flags = pcibr_lock(pcibus_info);
spin_lock_irqsave(&pcibus_info->pbi_lock, flags);
free_ate_resource(&pcibus_info->pbi_int_ate_resource, index);
pcibr_unlock(pcibus_info, flags);
spin_unlock_irqrestore(&pcibus_info->pbi_lock, flags);
}

View File

@ -137,14 +137,12 @@ pcibr_dmatrans_direct64(struct pcidev_info * info, u64 paddr,
pci_addr |= PCI64_ATTR_VIRTUAL;
return pci_addr;
}
static dma_addr_t
pcibr_dmatrans_direct32(struct pcidev_info * info,
u64 paddr, size_t req_size, u64 flags)
{
struct pcidev_info *pcidev_info = info->pdi_host_pcidev_info;
struct pcibus_info *pcibus_info = (struct pcibus_info *)pcidev_info->
pdi_pcibus_info;
@ -171,7 +169,6 @@ pcibr_dmatrans_direct32(struct pcidev_info * info,
}
return PCI32_DIRECT_BASE | offset;
}
/*
@ -218,9 +215,8 @@ void sn_dma_flush(u64 addr)
u64 flags;
u64 itte;
struct hubdev_info *hubinfo;
volatile struct sn_flush_device_kernel *p;
volatile struct sn_flush_device_common *common;
struct sn_flush_device_kernel *p;
struct sn_flush_device_common *common;
struct sn_flush_nasid_entry *flush_nasid_list;
if (!sn_ioif_inited)
@ -310,8 +306,7 @@ void sn_dma_flush(u64 addr)
(common->sfdl_slot - 1));
}
} else {
spin_lock_irqsave((spinlock_t *)&p->sfdl_flush_lock,
flags);
spin_lock_irqsave(&p->sfdl_flush_lock, flags);
*common->sfdl_flush_addr = 0;
/* force an interrupt. */
@ -322,8 +317,7 @@ void sn_dma_flush(u64 addr)
cpu_relax();
/* okay, everything is synched up. */
spin_unlock_irqrestore((spinlock_t *)&p->sfdl_flush_lock,
flags);
spin_unlock_irqrestore(&p->sfdl_flush_lock, flags);
}
return;
}

View File

@ -163,9 +163,12 @@ pcibr_bus_fixup(struct pcibus_bussoft *prom_bussoft, struct pci_controller *cont
/* Setup the PMU ATE map */
soft->pbi_int_ate_resource.lowest_free_index = 0;
soft->pbi_int_ate_resource.ate =
kmalloc(soft->pbi_int_ate_size * sizeof(u64), GFP_KERNEL);
memset(soft->pbi_int_ate_resource.ate, 0,
(soft->pbi_int_ate_size * sizeof(u64)));
kzalloc(soft->pbi_int_ate_size * sizeof(u64), GFP_KERNEL);
if (!soft->pbi_int_ate_resource.ate) {
kfree(soft);
return NULL;
}
if (prom_bussoft->bs_asic_type == PCIIO_ASIC_TYPE_TIOCP) {
/* TIO PCI Bridge: find nearest node with CPUs */

View File

@ -29,28 +29,7 @@
/*
* sys_tas() - test-and-set
* linuxthreads testing version
*/
#ifndef CONFIG_SMP
asmlinkage int sys_tas(int *addr)
{
int oldval;
unsigned long flags;
if (!access_ok(VERIFY_WRITE, addr, sizeof (int)))
return -EFAULT;
local_irq_save(flags);
oldval = *addr;
if (!oldval)
*addr = 1;
local_irq_restore(flags);
return oldval;
}
#else /* CONFIG_SMP */
#include <linux/spinlock.h>
static DEFINE_SPINLOCK(tas_lock);
asmlinkage int sys_tas(int *addr)
{
int oldval;
@ -58,15 +37,43 @@ asmlinkage int sys_tas(int *addr)
if (!access_ok(VERIFY_WRITE, addr, sizeof (int)))
return -EFAULT;
_raw_spin_lock(&tas_lock);
oldval = *addr;
if (!oldval)
*addr = 1;
_raw_spin_unlock(&tas_lock);
/* atomic operation:
* oldval = *addr; *addr = 1;
*/
__asm__ __volatile__ (
DCACHE_CLEAR("%0", "r4", "%1")
" .fillinsn\n"
"1:\n"
" lock %0, @%1 -> unlock %2, @%1\n"
"2:\n"
/* NOTE:
* The m32r processor can accept interrupts only
* at the 32-bit instruction boundary.
* So, in the above code, the "unlock" instruction
* can be executed immediately after the "lock"
* instruction, without any interrupt in between.
*/
".section .fixup,\"ax\"\n"
" .balign 4\n"
"3: ldi %0, #%3\n"
" seth r14, #high(2b)\n"
" or3 r14, r14, #low(2b)\n"
" jmp r14\n"
".previous\n"
".section __ex_table,\"a\"\n"
" .balign 4\n"
" .long 1b,3b\n"
".previous\n"
: "=&r" (oldval)
: "r" (addr), "r" (1), "i"(-EFAULT)
: "r14", "memory"
#ifdef CONFIG_CHIP_M32700_TS1
, "r4"
#endif /* CONFIG_CHIP_M32700_TS1 */
);
return oldval;
}
#endif /* CONFIG_SMP */
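For context on what the routine above gives user space: linuxthreads-style code would use sys_tas() as an atomic test-and-set to build a spinlock, roughly as in the hypothetical user-space sketch below (the syscall wrapper and the __NR_tas number name are assumptions, not part of this patch):

#include <unistd.h>
#include <sys/syscall.h>

/* Hypothetical wrapper; __NR_tas is assumed to be the m32r syscall
 * number for sys_tas() - check asm-m32r/unistd.h for the real name. */
static int tas(int *addr)
{
	return syscall(__NR_tas, addr);
}

static void spin_lock_tas(int *word)
{
	/* sys_tas() atomically returns the old value and stores 1,
	 * so a return of 0 means the lock was just acquired. */
	while (tas(word) != 0)
		;	/* busy-wait until the holder stores 0 again */
}

static void spin_unlock_tas(int *word)
{
	*word = 0;	/* a plain store releases the lock */
}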
/*
* sys_pipe() is the normal C calling standard for creating

View File

@ -21,6 +21,10 @@ config GENERIC_CALIBRATE_DELAY
bool
default y
config TIME_LOW_RES
bool
default y
config ARCH_MAY_HAVE_PC_FDC
bool
depends on Q40 || (BROKEN && SUN3X)

View File

@ -131,9 +131,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|BINDEC idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -60,9 +60,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|BINSTR idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -152,9 +152,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|BUGFIX idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -69,9 +69,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|DECBIN idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -22,9 +22,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
DO_FUNC: |idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -5,9 +5,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
| fpsp.h --- stack frame offsets during FPSP exception handling
|

View File

@ -29,9 +29,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
GEN_EXCEPT: |idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -54,9 +54,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
GET_OP: |idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -12,9 +12,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
KERNEL_EX: |idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -16,9 +16,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
RES_FUNC: |idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -8,9 +8,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|ROUND idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -38,9 +38,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|SACOS idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -38,9 +38,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|SASIN idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -43,9 +43,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|satan idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -45,9 +45,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|satanh idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -21,9 +21,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|SCALE idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -49,9 +49,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|SCOSH idnt 2,1 | Motorola 040 Floating Point Software Package

View File

@ -331,9 +331,8 @@
| Copyright (C) Motorola, Inc. 1990
| All Rights Reserved
|
| THIS IS UNPUBLISHED PROPRIETARY SOURCE CODE OF MOTOROLA
| The copyright notice above does not evidence any
| actual or intended publication of such source code.
| For details on the license for this file, please see the
| file, README, in this same directory.
|setox idnt 2,1 | Motorola 040 Floating Point Software Package

Some files were not shown because too many files have changed in this diff.