Merge with rsync://rsync.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git

This commit is contained in:
Steve French 2005-06-23 19:03:20 -07:00
commit fa5cfae377
443 changed files with 11324 additions and 4317 deletions

View file

@ -111,6 +111,7 @@ mkdep
mktables
modpost
modversions.h*
offset.h
offsets.h
oui.c*
parse.c*

View file

@ -66,6 +66,14 @@ Who: Paul E. McKenney <paulmck@us.ibm.com>
---------------------------
What: remove verify_area()
When: July 2006
Files: Various uaccess.h headers.
Why: Deprecated and redundant. access_ok() should be used instead.
Who: Jesper Juhl <juhl-lkml@dif.dk>
---------------------------
What: IEEE1394 Audio and Music Data Transmission Protocol driver,
Connection Management Procedures driver
When: November 2005
@ -86,6 +94,16 @@ Who: Jody McIntyre <scjody@steamballoon.com>
---------------------------
What: register_serial/unregister_serial
When: December 2005
Why: This interface does not allow serial ports to be registered against
a struct device, and as such does not allow correct power management
of such ports. 8250-based ports should use serial8250_register_port
and serial8250_unregister_port instead.
Who: Russell King <rmk@arm.linux.org.uk>
---------------------------
What: i2c sysfs name change: in1_ref, vid deprecated in favour of cpu0_vid
When: November 2005
Files: drivers/i2c/chips/adm1025.c, drivers/i2c/chips/adm1026.c

View file

@ -304,57 +304,6 @@ tcp_low_latency - BOOLEAN
changed would be a Beowulf compute cluster.
Default: 0
tcp_westwood - BOOLEAN
Enable TCP Westwood+ congestion control algorithm.
TCP Westwood+ is a sender-side only modification of the TCP Reno
protocol stack that optimizes the performance of TCP congestion
control. It is based on end-to-end bandwidth estimation to set
congestion window and slow start threshold after a congestion
episode. Using this estimation, TCP Westwood+ adaptively sets a
slow start threshold and a congestion window which takes into
account the bandwidth used at the time congestion is experienced.
TCP Westwood+ significantly increases fairness wrt TCP Reno in
wired networks and throughput over wireless links.
Default: 0
tcp_vegas_cong_avoid - BOOLEAN
Enable TCP Vegas congestion avoidance algorithm.
TCP Vegas is a sender-side only change to TCP that anticipates
the onset of congestion by estimating the bandwidth. TCP Vegas
adjusts the sending rate by modifying the congestion
window. TCP Vegas should provide less packet loss, but it is
not as aggressive as TCP Reno.
Default: 0
tcp_bic - BOOLEAN
Enable BIC TCP congestion control algorithm.
BIC-TCP is a sender-side only change that ensures a linear RTT
fairness under large windows while offering both scalability and
bounded TCP-friendliness. The protocol combines two schemes
called additive increase and binary search increase. When the
congestion window is large, additive increase with a large
increment ensures linear RTT fairness as well as good
scalability. Under small congestion windows, binary search
increase provides TCP friendliness.
Default: 0
tcp_bic_low_window - INTEGER
Sets the threshold window (in packets) where BIC TCP starts to
adjust the congestion window. Below this threshold BIC TCP behaves
the same as the default TCP Reno.
Default: 14
tcp_bic_fast_convergence - BOOLEAN
Forces BIC TCP to more quickly respond to changes in congestion
window. Allows two flows sharing the same connection to converge
more rapidly.
Default: 1
tcp_default_win_scale - INTEGER
Sets the minimum window scale TCP will negotiate for on all
connections.
Default: 7
tcp_tso_win_divisor - INTEGER
This allows control over what percentage of the congestion window
can be consumed by a single TSO frame.
@ -368,6 +317,11 @@ tcp_frto - BOOLEAN
where packet loss is typically due to random radio interference
rather than intermediate router congestion.
tcp_congestion_control - STRING
Set the congestion control algorithm to be used for new
connections. The algorithm "reno" is always available, but
additional choices may be available based on kernel configuration.
somaxconn - INTEGER
Limit of socket listen() backlog, known in userspace as SOMAXCONN.
Defaults to 128. See also tcp_max_syn_backlog for additional tuning

View file

@ -1,5 +1,72 @@
How the new TCP output machine [nyi] works.
TCP protocol
============
Last updated: 21 June 2005
Contents
========
- Congestion control
- How the new TCP output machine [nyi] works
Congestion control
==================
The following variables are used in the tcp_sock for congestion control:
snd_cwnd The size of the congestion window
snd_ssthresh Slow start threshold. We are in slow start if
snd_cwnd is less than this.
snd_cwnd_cnt A counter used to slow down the rate of increase
once we exceed slow start threshold.
snd_cwnd_clamp This is the maximum size that snd_cwnd can grow to.
snd_cwnd_stamp Timestamp for when congestion window last validated.
snd_cwnd_used Used as a highwater mark for how much of the
congestion window is in use. It is used to adjust
snd_cwnd down when the link is limited by the
application rather than the network.
As of 2.6.13, Linux supports pluggable congestion control algorithms.
A congestion control mechanism can be registered through functions in
tcp_cong.c. The functions used by the congestion control mechanism are
registered via passing a tcp_congestion_ops struct to
tcp_register_congestion_control. At a minimum, name, ssthresh,
cong_avoid and min_cwnd must be valid.
Private data for a congestion control mechanism is stored in tp->ca_priv.
tcp_ca(tp) returns a pointer to this space. This is preallocated space - it
is important to check that your private data will fit in this space, or
alternatively to allocate the space elsewhere and store a pointer to it
here.
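A minimal registration sketch follows. It assumes the 2.6.13-era layout of
tcp_congestion_ops; the callback prototypes vary between kernel versions, so
treat the signatures as illustrative rather than authoritative.

#include <linux/module.h>
#include <net/tcp.h>

/* Hypothetical private state, living in the preallocated tp->ca_priv area. */
struct example_ca {
	u32 last_cwnd;
};

static u32 example_ssthresh(struct tcp_sock *tp)
{
	return max(tp->snd_cwnd >> 1U, 2U);	/* halve cwnd, Reno-style */
}

static u32 example_min_cwnd(struct tcp_sock *tp)
{
	return tp->snd_ssthresh;
}

static void example_cong_avoid(struct tcp_sock *tp, u32 ack, u32 rtt,
			       u32 in_flight, int good_ack)
{
	struct example_ca *ca = tcp_ca(tp);	/* pointer into tp->ca_priv */

	ca->last_cwnd = tp->snd_cwnd;
	/* ... grow tp->snd_cwnd / tp->snd_cwnd_cnt here ... */
}

static struct tcp_congestion_ops example_ops = {
	.name		= "example",
	.owner		= THIS_MODULE,
	.ssthresh	= example_ssthresh,
	.min_cwnd	= example_min_cwnd,
	.cong_avoid	= example_cong_avoid,
};

static int __init example_register(void)
{
	/* A real module should also check that sizeof(struct example_ca)
	 * fits the preallocated ca_priv space, as noted above. */
	return tcp_register_congestion_control(&example_ops);
}

static void __exit example_unregister(void)
{
	tcp_unregister_congestion_control(&example_ops);
}

module_init(example_register);
module_exit(example_unregister);
MODULE_LICENSE("GPL");
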
There are three kinds of congestion control algorithms currently: The
simplest ones are derived from TCP Reno (highspeed, scalable) and just
provide an alternative congestion window calculation. More complex
ones like BIC try to look at other events to provide better
heuristics. There are also round trip time based algorithms like
Vegas and Westwood+.
Good TCP congestion control is a complex problem because the algorithm
needs to maintain fairness and performance. Please review current
research and RFCs before developing new modules.
Which congestion control mechanism is used for new connections is
determined by the setting of the sysctl net.ipv4.tcp_congestion_control.
The default congestion control will be the last one registered (LIFO);
so if you build everything as modules, the default will be reno. If you
build with the defaults from Kconfig, then BIC will be builtin (not a
module) and it will end up the default.
If you really want a particular default value then you will need
to set it with the sysctl. If you use a sysctl, the module will be autoloaded
if needed and you will get the expected protocol. If you ask for an
unknown congestion method, then the sysctl attempt will fail.
If you remove a TCP congestion control module, then you will get the next
available one. Since reno cannot be built as a module and cannot be
deleted, it will always be available.
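For instance, a small userspace sketch that selects an algorithm through
procfs; the sysctl net.ipv4.tcp_congestion_control corresponds to
/proc/sys/net/ipv4/tcp_congestion_control, and the name "bic" below is only
an assumption about what the running kernel offers.

#include <stdio.h>

int main(void)
{
	/* Writing a name selects it for new connections; the matching
	 * module is autoloaded if needed, and an unknown name fails. */
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_congestion_control", "w");

	if (!f) {
		perror("tcp_congestion_control");
		return 1;
	}
	fputs("bic\n", f);
	return fclose(f) ? 1 : 0;
}
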
How the new TCP output machine [nyi] works.
===========================================
Data is kept on a single queue. The skb->users flag tells us if the frame is
one that has been queued already. To add a frame we throw it on the end. Ack

View file

@ -49,6 +49,7 @@ show up in /proc/sys/kernel:
- shmmax [ sysv ipc ]
- shmmni
- stop-a [ SPARC only ]
- suid_dumpable
- sysrq ==> Documentation/sysrq.txt
- tainted
- threads-max
@ -300,6 +301,25 @@ kernel. This value defaults to SHMMAX.
==============================================================
suid_dumpable:
This value can be used to query and set the core dump mode for setuid
or otherwise protected/tainted binaries. The modes are
0 - (default) - traditional behaviour. Any process which has changed
privilege levels or is execute only will not be dumped
1 - (debug) - all processes dump core when possible. The core dump is
owned by the current user and no security is applied. This is
intended for system debugging situations only. Ptrace is unchecked.
2 - (suidsafe) - any binary which normally would not be dumped is dumped
readable by root only. This allows the end user to remove
such a dump but not access it directly. For security reasons
core dumps in this mode will not overwrite one another or
other files. This mode is appropriate when administrators are
attempting to debug problems in a normal environment.
==============================================================
tainted:
Non-zero if the kernel has been tainted. Numeric values, which

View file

@ -22,7 +22,7 @@ copy of the structure. You must not re-register over the top of the line
discipline even with the same data or your computer again will be eaten by
demons.
In order to remove a line discipline call tty_register_ldisc passing NULL.
In order to remove a line discipline call tty_unregister_ldisc().
In ancient times this always worked. In modern times the function will
return -EBUSY if the ldisc is currently in use. Since the ldisc referencing
code manages the module counts this should not usually be a concern.
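A teardown sketch using that call; N_EXAMPLE stands in for a real ldisc
number and is purely hypothetical:

static void __exit example_ldisc_exit(void)
{
	int ret = tty_unregister_ldisc(N_EXAMPLE);

	/* -EBUSY means the ldisc is still attached to a tty somewhere;
	 * with the core managing module counts this is the rare case. */
	if (ret)
		printk(KERN_ERR "example: ldisc unregister failed (%d)\n", ret);
}
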

View file

@ -304,7 +304,7 @@ S: Maintained
ARM/PT DIGITAL BOARD PORT
P: Stefan Eletzhofer
M: stefan.eletzhofer@eletztrick.de
L: linux-arm-kernel@lists.arm.linux.org.uk
L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
W: http://www.arm.linux.org.uk/
S: Maintained
@ -317,21 +317,21 @@ S: Maintained
ARM/STRONGARM110 PORT
P: Russell King
M: rmk@arm.linux.org.uk
L: linux-arm-kernel@lists.arm.linux.org.uk
L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
W: http://www.arm.linux.org.uk/
S: Maintained
ARM/S3C2410 ARM ARCHITECTURE
P: Ben Dooks
M: ben-s3c2410@fluff.org
L: linux-arm-kernel@lists.arm.linux.org.uk
L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
W: http://www.fluff.org/ben/linux/
S: Maintained
ARM/S3C2440 ARM ARCHITECTURE
P: Ben Dooks
M: ben-s3c2440@fluff.org
L: linux-arm-kernel@lists.arm.linux.org.uk
L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
W: http://www.fluff.org/ben/linux/
S: Maintained
@ -504,6 +504,13 @@ L: bonding-devel@lists.sourceforge.net
W: http://sourceforge.net/projects/bonding/
S: Supported
BROADBAND PROCESSOR ARCHITECTURE
P: Arnd Bergmann
M: arnd@arndb.de
L: linuxppc64-dev@ozlabs.org
W: http://linuxppc64.org
S: Supported
BTTV VIDEO4LINUX DRIVER
P: Gerd Knorr
M: kraxel@bytesex.org
@ -1853,7 +1860,7 @@ S: Maintained
PXA2xx SUPPORT
P: Nicolas Pitre
M: nico@cam.org
L: linux-arm-kernel@lists.arm.linux.org.uk
L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
S: Maintained
QLOGIC QLA2XXX FC-SCSI DRIVER
@ -2155,7 +2162,7 @@ SHARP LH SUPPORT (LH7952X & LH7A40X)
P: Marc Singer
M: elf@buici.com
W: http://projects.buici.com/arm
L: linux-arm-kernel@lists.arm.linux.org.uk
L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
S: Maintained
SPARC (sparc32):

View file

@ -518,7 +518,7 @@ CFLAGS += $(call add-align,CONFIG_CC_ALIGN_LOOPS,-loops)
CFLAGS += $(call add-align,CONFIG_CC_ALIGN_JUMPS,-jumps)
ifdef CONFIG_FRAME_POINTER
CFLAGS += -fno-omit-frame-pointer
CFLAGS += -fno-omit-frame-pointer $(call cc-option,-fno-optimize-sibling-calls,)
else
CFLAGS += -fomit-frame-pointer
endif

View file

@ -509,7 +509,7 @@ config NR_CPUS
depends on SMP
default "64"
config DISCONTIGMEM
config ARCH_DISCONTIGMEM_ENABLE
bool "Discontiguous Memory Support (EXPERIMENTAL)"
depends on EXPERIMENTAL
help
@ -518,6 +518,8 @@ config DISCONTIGMEM
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
source "mm/Kconfig"
config NUMA
bool "NUMA Support (EXPERIMENTAL)"
depends on DISCONTIGMEM

View file

@ -96,7 +96,7 @@ CONFIG_ALPHA_CORE_AGP=y
CONFIG_ALPHA_BROKEN_IRQ_MASK=y
CONFIG_EISA=y
# CONFIG_SMP is not set
# CONFIG_DISCONTIGMEM is not set
# CONFIG_ARCH_DISCONTIGMEM_ENABLE is not set
CONFIG_VERBOSE_MCHECK=y
CONFIG_VERBOSE_MCHECK_ON=1
CONFIG_PCI_LEGACY_PROC=y

View file

@ -327,8 +327,6 @@ void __init mem_init(void)
extern char _text, _etext, _data, _edata;
extern char __init_begin, __init_end;
unsigned long nid, i;
struct page * lmem_map;
high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
reservedpages = 0;
@ -338,10 +336,10 @@ void __init mem_init(void)
*/
totalram_pages += free_all_bootmem_node(NODE_DATA(nid));
lmem_map = node_mem_map(nid);
pfn = NODE_DATA(nid)->node_start_pfn;
for (i = 0; i < node_spanned_pages(nid); i++, pfn++)
if (page_is_ram(pfn) && PageReserved(lmem_map+i))
if (page_is_ram(pfn) &&
PageReserved(nid_page_nr(nid, i)))
reservedpages++;
}
@ -373,18 +371,18 @@ show_mem(void)
show_free_areas();
printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
for_each_online_node(nid) {
struct page * lmem_map = node_mem_map(nid);
i = node_spanned_pages(nid);
while (i-- > 0) {
struct page *page = nid_page_nr(nid, i);
total++;
if (PageReserved(lmem_map+i))
if (PageReserved(page))
reserved++;
else if (PageSwapCache(lmem_map+i))
else if (PageSwapCache(page))
cached++;
else if (!page_count(lmem_map+i))
else if (!page_count(page))
free++;
else
shared += page_count(lmem_map + i) - 1;
shared += page_count(page) - 1;
}
}
printk("%ld pages of RAM\n",total);

View file

@ -346,7 +346,7 @@ config PREEMPT
Say Y here if you are building a kernel for a desktop, embedded
or real-time system. Say N if you are unsure.
config DISCONTIGMEM
config ARCH_DISCONTIGMEM_ENABLE
bool
default (ARCH_LH7A40X && !LH7A40X_CONTIGMEM)
help
@ -355,6 +355,8 @@ config DISCONTIGMEM
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
source "mm/Kconfig"
config LEDS
bool "Timer and CPU usage LEDs"
depends on ARCH_CDB89712 || ARCH_CO285 || ARCH_EBSA110 || \

View file

@ -21,8 +21,8 @@
#
# User may have a custom install script
if [ -x ~/bin/installkernel ]; then exec ~/bin/installkernel "$@"; fi
if [ -x /sbin/installkernel ]; then exec /sbin/installkernel "$@"; fi
if [ -x ~/bin/${CROSS_COMPILE}installkernel ]; then exec ~/bin/${CROSS_COMPILE}installkernel "$@"; fi
if [ -x /sbin/${CROSS_COMPILE}installkernel ]; then exec /sbin/${CROSS_COMPILE}installkernel "$@"; fi
if [ "$(basename $2)" = "zImage" ]; then
# Compressed install

View file

@ -1,14 +1,13 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.12-rc1-bk2
# Sun Mar 27 17:47:45 2005
# Linux kernel version: 2.6.12-git4
# Wed Jun 22 15:56:42 2005
#
CONFIG_ARM=y
CONFIG_MMU=y
CONFIG_UID16=y
CONFIG_RWSEM_GENERIC_SPINLOCK=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_IOMAP=y
#
# Code maturity level options
@ -17,6 +16,7 @@ CONFIG_EXPERIMENTAL=y
# CONFIG_CLEAN_COMPILE is not set
CONFIG_BROKEN=y
CONFIG_BROKEN_ON_SMP=y
CONFIG_INIT_ENV_ARG_LIMIT=32
#
# General setup
@ -35,6 +35,8 @@ CONFIG_KOBJECT_UEVENT=y
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
# CONFIG_KALLSYMS_EXTRA_PASS is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
@ -81,6 +83,7 @@ CONFIG_ARCH_S3C2410=y
# CONFIG_ARCH_VERSATILE is not set
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_AAEC2000 is not set
#
# S3C24XX Implementations
@ -134,6 +137,7 @@ CONFIG_CPU_TLB_V4WBI=y
#
# Bus support
#
CONFIG_ISA_DMA_API=y
#
# PCCARD (PCMCIA/CardBus) support
@ -143,7 +147,9 @@ CONFIG_CPU_TLB_V4WBI=y
#
# Kernel Features
#
# CONFIG_SMP is not set
# CONFIG_PREEMPT is not set
# CONFIG_DISCONTIGMEM is not set
CONFIG_ALIGNMENT_TRAP=y
#
@ -297,7 +303,6 @@ CONFIG_PARPORT_1284=y
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
# CONFIG_PARIDE is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
@ -359,6 +364,7 @@ CONFIG_BLK_DEV_IDE_BAST=y
#
# Fusion MPT device support
#
# CONFIG_FUSION is not set
#
# IEEE 1394 (FireWire) support
@ -378,10 +384,11 @@ CONFIG_NET=y
# Networking options
#
# CONFIG_PACKET is not set
# CONFIG_NETLINK_DEV is not set
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_FIB_HASH=y
# CONFIG_IP_FIB_TRIE is not set
# CONFIG_IP_MULTICAST is not set
# CONFIG_IP_ADVANCED_ROUTER is not set
CONFIG_IP_PNP=y
@ -443,8 +450,9 @@ CONFIG_NETDEVICES=y
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
# CONFIG_MII is not set
CONFIG_MII=m
# CONFIG_SMC91X is not set
CONFIG_DM9000=m
#
# Ethernet (1000 Mbit)
@ -521,7 +529,6 @@ CONFIG_SERIO_SERPORT=y
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_GAMEPORT is not set
CONFIG_SOUND_GAMEPORT=y
#
# Character devices
@ -605,7 +612,6 @@ CONFIG_S3C2410_RTC=y
#
# TPM devices
#
# CONFIG_TCG_TPM is not set
#
# I2C support
@ -654,6 +660,7 @@ CONFIG_SENSORS_LM78=m
CONFIG_SENSORS_LM85=m
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
@ -665,6 +672,7 @@ CONFIG_SENSORS_LM85=m
#
# Other I2C Chip support
#
# CONFIG_SENSORS_DS1337 is not set
CONFIG_SENSORS_EEPROM=m
# CONFIG_SENSORS_PCF8574 is not set
# CONFIG_SENSORS_PCF8591 is not set
@ -696,8 +704,10 @@ CONFIG_FB=y
# CONFIG_FB_CFB_COPYAREA is not set
# CONFIG_FB_CFB_IMAGEBLIT is not set
# CONFIG_FB_SOFT_CURSOR is not set
# CONFIG_FB_MACMODES is not set
CONFIG_FB_MODE_HELPERS=y
# CONFIG_FB_TILEBLITTING is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_VIRTUAL is not set
#
@ -782,7 +792,6 @@ CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
#
CONFIG_PROC_FS=y
CONFIG_SYSFS=y
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVPTS_FS_XATTR is not set
# CONFIG_TMPFS is not set
# CONFIG_HUGETLBFS is not set

View file

@ -26,6 +26,7 @@
* 03-Mar-2005 BJD Ensured that bast-cpld.h is included
* 10-Mar-2005 LCVR Changed S3C2410_VA to S3C24XX_VA
* 14-Mar-2005 BJD Updated for __iomem changes
* 22-Jun-2005 BJD Added DM9000 platform information
*/
#include <linux/kernel.h>
@ -35,6 +36,7 @@
#include <linux/timer.h>
#include <linux/init.h>
#include <linux/device.h>
#include <linux/dm9000.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
@ -53,6 +55,7 @@
#include <asm/arch/regs-serial.h>
#include <asm/arch/regs-gpio.h>
#include <asm/arch/regs-mem.h>
#include <asm/arch/regs-lcd.h>
#include <asm/arch/nand.h>
#include <linux/mtd/mtd.h>
@ -112,7 +115,6 @@ static struct map_desc bast_iodesc[] __initdata = {
{ VA_C2(BAST_VA_ISAMEM), PA_CS2(BAST_PA_ISAMEM), SZ_16M, MT_DEVICE },
{ VA_C2(BAST_VA_ASIXNET), PA_CS3(BAST_PA_ASIXNET), SZ_1M, MT_DEVICE },
{ VA_C2(BAST_VA_SUPERIO), PA_CS2(BAST_PA_SUPERIO), SZ_1M, MT_DEVICE },
{ VA_C2(BAST_VA_DM9000), PA_CS2(BAST_PA_DM9000), SZ_1M, MT_DEVICE },
{ VA_C2(BAST_VA_IDEPRI), PA_CS3(BAST_PA_IDEPRI), SZ_1M, MT_DEVICE },
{ VA_C2(BAST_VA_IDESEC), PA_CS3(BAST_PA_IDESEC), SZ_1M, MT_DEVICE },
{ VA_C2(BAST_VA_IDEPRIAUX), PA_CS3(BAST_PA_IDEPRIAUX), SZ_1M, MT_DEVICE },
@ -123,7 +125,6 @@ static struct map_desc bast_iodesc[] __initdata = {
{ VA_C3(BAST_VA_ISAMEM), PA_CS3(BAST_PA_ISAMEM), SZ_16M, MT_DEVICE },
{ VA_C3(BAST_VA_ASIXNET), PA_CS3(BAST_PA_ASIXNET), SZ_1M, MT_DEVICE },
{ VA_C3(BAST_VA_SUPERIO), PA_CS3(BAST_PA_SUPERIO), SZ_1M, MT_DEVICE },
{ VA_C3(BAST_VA_DM9000), PA_CS3(BAST_PA_DM9000), SZ_1M, MT_DEVICE },
{ VA_C3(BAST_VA_IDEPRI), PA_CS3(BAST_PA_IDEPRI), SZ_1M, MT_DEVICE },
{ VA_C3(BAST_VA_IDESEC), PA_CS3(BAST_PA_IDESEC), SZ_1M, MT_DEVICE },
{ VA_C3(BAST_VA_IDEPRIAUX), PA_CS3(BAST_PA_IDEPRIAUX), SZ_1M, MT_DEVICE },
@ -134,7 +135,6 @@ static struct map_desc bast_iodesc[] __initdata = {
{ VA_C4(BAST_VA_ISAMEM), PA_CS4(BAST_PA_ISAMEM), SZ_16M, MT_DEVICE },
{ VA_C4(BAST_VA_ASIXNET), PA_CS5(BAST_PA_ASIXNET), SZ_1M, MT_DEVICE },
{ VA_C4(BAST_VA_SUPERIO), PA_CS4(BAST_PA_SUPERIO), SZ_1M, MT_DEVICE },
{ VA_C4(BAST_VA_DM9000), PA_CS4(BAST_PA_DM9000), SZ_1M, MT_DEVICE },
{ VA_C4(BAST_VA_IDEPRI), PA_CS5(BAST_PA_IDEPRI), SZ_1M, MT_DEVICE },
{ VA_C4(BAST_VA_IDESEC), PA_CS5(BAST_PA_IDESEC), SZ_1M, MT_DEVICE },
{ VA_C4(BAST_VA_IDEPRIAUX), PA_CS5(BAST_PA_IDEPRIAUX), SZ_1M, MT_DEVICE },
@ -145,7 +145,6 @@ static struct map_desc bast_iodesc[] __initdata = {
{ VA_C5(BAST_VA_ISAMEM), PA_CS5(BAST_PA_ISAMEM), SZ_16M, MT_DEVICE },
{ VA_C5(BAST_VA_ASIXNET), PA_CS5(BAST_PA_ASIXNET), SZ_1M, MT_DEVICE },
{ VA_C5(BAST_VA_SUPERIO), PA_CS5(BAST_PA_SUPERIO), SZ_1M, MT_DEVICE },
{ VA_C5(BAST_VA_DM9000), PA_CS5(BAST_PA_DM9000), SZ_1M, MT_DEVICE },
{ VA_C5(BAST_VA_IDEPRI), PA_CS5(BAST_PA_IDEPRI), SZ_1M, MT_DEVICE },
{ VA_C5(BAST_VA_IDESEC), PA_CS5(BAST_PA_IDESEC), SZ_1M, MT_DEVICE },
{ VA_C5(BAST_VA_IDEPRIAUX), PA_CS5(BAST_PA_IDEPRIAUX), SZ_1M, MT_DEVICE },
@ -313,6 +312,45 @@ static struct s3c2410_platform_nand bast_nand_info = {
.select_chip = bast_nand_select,
};
/* DM9000 */
static struct resource bast_dm9k_resource[] = {
[0] = {
.start = S3C2410_CS5 + BAST_PA_DM9000,
.end = S3C2410_CS5 + BAST_PA_DM9000 + 3,
.flags = IORESOURCE_MEM
},
[1] = {
.start = S3C2410_CS5 + BAST_PA_DM9000 + 0x40,
.end = S3C2410_CS5 + BAST_PA_DM9000 + 0x40 + 0x3f,
.flags = IORESOURCE_MEM
},
[2] = {
.start = IRQ_DM9000,
.end = IRQ_DM9000,
.flags = IORESOURCE_IRQ
}
};
/* for the moment we limit ourselves to 16bit IO until some
* better IO routines can be written and tested
*/
struct dm9000_plat_data bast_dm9k_platdata = {
.flags = DM9000_PLATF_16BITONLY
};
static struct platform_device bast_device_dm9k = {
.name = "dm9000",
.id = 0,
.num_resources = ARRAY_SIZE(bast_dm9k_resource),
.resource = bast_dm9k_resource,
.dev = {
.platform_data = &bast_dm9k_platdata,
}
};
/* Standard BAST devices */
@ -324,7 +362,8 @@ static struct platform_device *bast_devices[] __initdata = {
&s3c_device_iis,
&s3c_device_rtc,
&s3c_device_nand,
&bast_device_nor
&bast_device_nor,
&bast_device_dm9k,
};
static struct clk *bast_clocks[] = {

View file

@ -27,6 +27,7 @@
* 10-Feb-2005 BJD Added power-off capability
* 10-Mar-2005 LCVR Changed S3C2410_VA to S3C24XX_VA
* 14-Mar-2005 BJD void __iomem fixes
* 22-Jun-2005 BJD Added DM9000 platform information
*/
#include <linux/kernel.h>
@ -35,6 +36,7 @@
#include <linux/list.h>
#include <linux/timer.h>
#include <linux/init.h>
#include <linux/dm9000.h>
#include <linux/serial.h>
#include <linux/tty.h>
@ -98,28 +100,24 @@ static struct map_desc vr1000_iodesc[] __initdata = {
* are only 8bit */
/* slow, byte */
{ VA_C2(VR1000_VA_DM9000), PA_CS2(VR1000_PA_DM9000), SZ_1M, MT_DEVICE },
{ VA_C2(VR1000_VA_IDEPRI), PA_CS3(VR1000_PA_IDEPRI), SZ_1M, MT_DEVICE },
{ VA_C2(VR1000_VA_IDESEC), PA_CS3(VR1000_PA_IDESEC), SZ_1M, MT_DEVICE },
{ VA_C2(VR1000_VA_IDEPRIAUX), PA_CS3(VR1000_PA_IDEPRIAUX), SZ_1M, MT_DEVICE },
{ VA_C2(VR1000_VA_IDESECAUX), PA_CS3(VR1000_PA_IDESECAUX), SZ_1M, MT_DEVICE },
/* slow, word */
{ VA_C3(VR1000_VA_DM9000), PA_CS3(VR1000_PA_DM9000), SZ_1M, MT_DEVICE },
{ VA_C3(VR1000_VA_IDEPRI), PA_CS3(VR1000_PA_IDEPRI), SZ_1M, MT_DEVICE },
{ VA_C3(VR1000_VA_IDESEC), PA_CS3(VR1000_PA_IDESEC), SZ_1M, MT_DEVICE },
{ VA_C3(VR1000_VA_IDEPRIAUX), PA_CS3(VR1000_PA_IDEPRIAUX), SZ_1M, MT_DEVICE },
{ VA_C3(VR1000_VA_IDESECAUX), PA_CS3(VR1000_PA_IDESECAUX), SZ_1M, MT_DEVICE },
/* fast, byte */
{ VA_C4(VR1000_VA_DM9000), PA_CS4(VR1000_PA_DM9000), SZ_1M, MT_DEVICE },
{ VA_C4(VR1000_VA_IDEPRI), PA_CS5(VR1000_PA_IDEPRI), SZ_1M, MT_DEVICE },
{ VA_C4(VR1000_VA_IDESEC), PA_CS5(VR1000_PA_IDESEC), SZ_1M, MT_DEVICE },
{ VA_C4(VR1000_VA_IDEPRIAUX), PA_CS5(VR1000_PA_IDEPRIAUX), SZ_1M, MT_DEVICE },
{ VA_C4(VR1000_VA_IDESECAUX), PA_CS5(VR1000_PA_IDESECAUX), SZ_1M, MT_DEVICE },
/* fast, word */
{ VA_C5(VR1000_VA_DM9000), PA_CS5(VR1000_PA_DM9000), SZ_1M, MT_DEVICE },
{ VA_C5(VR1000_VA_IDEPRI), PA_CS5(VR1000_PA_IDEPRI), SZ_1M, MT_DEVICE },
{ VA_C5(VR1000_VA_IDESEC), PA_CS5(VR1000_PA_IDESEC), SZ_1M, MT_DEVICE },
{ VA_C5(VR1000_VA_IDEPRIAUX), PA_CS5(VR1000_PA_IDEPRIAUX), SZ_1M, MT_DEVICE },
@ -246,6 +244,74 @@ static struct platform_device vr1000_nor = {
.resource = vr1000_nor_resource,
};
/* DM9000 ethernet devices */
static struct resource vr1000_dm9k0_resource[] = {
[0] = {
.start = S3C2410_CS5 + VR1000_PA_DM9000,
.end = S3C2410_CS5 + VR1000_PA_DM9000 + 3,
.flags = IORESOURCE_MEM
},
[1] = {
.start = S3C2410_CS5 + VR1000_PA_DM9000 + 0x40,
.end = S3C2410_CS5 + VR1000_PA_DM9000 + 0x7f,
.flags = IORESOURCE_MEM
},
[2] = {
.start = IRQ_VR1000_DM9000A,
.end = IRQ_VR1000_DM9000A,
.flags = IORESOURCE_IRQ
}
};
static struct resource vr1000_dm9k1_resource[] = {
[0] = {
.start = S3C2410_CS5 + VR1000_PA_DM9000 + 0x80,
.end = S3C2410_CS5 + VR1000_PA_DM9000 + 0x83,
.flags = IORESOURCE_MEM
},
[1] = {
.start = S3C2410_CS5 + VR1000_PA_DM9000 + 0xC0,
.end = S3C2410_CS5 + VR1000_PA_DM9000 + 0xFF,
.flags = IORESOURCE_MEM
},
[2] = {
.start = IRQ_VR1000_DM9000N,
.end = IRQ_VR1000_DM9000N,
.flags = IORESOURCE_IRQ
}
};
/* for the moment we limit ourselves to 16bit IO until some
* better IO routines can be written and tested
*/
struct dm9000_plat_data vr1000_dm9k_platdata = {
.flags = DM9000_PLATF_16BITONLY,
};
static struct platform_device vr1000_dm9k0 = {
.name = "dm9000",
.id = 0,
.num_resources = ARRAY_SIZE(vr1000_dm9k0_resource),
.resource = vr1000_dm9k0_resource,
.dev = {
.platform_data = &vr1000_dm9k_platdata,
}
};
static struct platform_device vr1000_dm9k1 = {
.name = "dm9000",
.id = 1,
.num_resources = ARRAY_SIZE(vr1000_dm9k1_resource),
.resource = vr1000_dm9k1_resource,
.dev = {
.platform_data = &vr1000_dm9k_platdata,
}
};
/* devices for this board */
static struct platform_device *vr1000_devices[] __initdata = {
&s3c_device_usb,
@ -253,8 +319,11 @@ static struct platform_device *vr1000_devices[] __initdata = {
&s3c_device_wdt,
&s3c_device_i2c,
&s3c_device_iis,
&s3c_device_adc,
&serial_device,
&vr1000_nor,
&vr1000_dm9k0,
&vr1000_dm9k1
};
static struct clk *vr1000_clocks[] = {

View file

@ -563,8 +563,14 @@ static bits64 estimateDiv128To64( bits64 a0, bits64 a1, bits64 b )
bits64 rem0, rem1, term0, term1;
bits64 z;
if ( b <= a0 ) return LIT64( 0xFFFFFFFFFFFFFFFF );
b0 = b>>32;
z = ( b0<<32 <= a0 ) ? LIT64( 0xFFFFFFFF00000000 ) : ( a0 / b0 )<<32;
b0 = b>>32; /* hence b0 is 32 bits wide now */
if ( b0<<32 <= a0 ) {
z = LIT64( 0xFFFFFFFF00000000 );
} else {
z = a0;
do_div( z, b0 );
z <<= 32;
}
mul64To128( b, z, &term0, &term1 );
sub128( a0, a1, term0, term1, &rem0, &rem1 );
while ( ( (sbits64) rem0 ) < 0 ) {
@ -573,7 +579,12 @@ static bits64 estimateDiv128To64( bits64 a0, bits64 a1, bits64 b )
add128( rem0, rem1, b0, b1, &rem0, &rem1 );
}
rem0 = ( rem0<<32 ) | ( rem1>>32 );
z |= ( b0<<32 <= rem0 ) ? 0xFFFFFFFF : rem0 / b0;
if ( b0<<32 <= rem0 ) {
z |= 0xFFFFFFFF;
} else {
do_div( rem0, b0 );
z |= rem0;
}
return z;
}
@ -601,6 +612,7 @@ static bits32 estimateSqrt32( int16 aExp, bits32 a )
};
int8 index;
bits32 z;
bits64 A;
index = ( a>>27 ) & 15;
if ( aExp & 1 ) {
@ -614,7 +626,9 @@ static bits32 estimateSqrt32( int16 aExp, bits32 a )
z = ( 0x20000 <= z ) ? 0xFFFF8000 : ( z<<15 );
if ( z <= a ) return (bits32) ( ( (sbits32) a )>>1 );
}
return ( (bits32) ( ( ( (bits64) a )<<31 ) / z ) ) + ( z>>1 );
A = ( (bits64) a )<<31;
do_div( A, z );
return ( (bits32) A ) + ( z>>1 );
}
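
These softfloat changes replace open-coded 64-bit divisions, which 32-bit
ARM kernels cannot rely on (the kernel does not link the libgcc division
helpers), with the kernel's do_div(). Its semantics are easy to misread, so
here is a minimal sketch; the wrapper name is made up for illustration:

#include <asm/div64.h>

/* do_div(n, base) divides the 64-bit lvalue n in place by a 32-bit base;
 * n becomes the quotient and the macro's value is the remainder. */
static u32 example_div64_32(u64 *n, u32 base)
{
	return do_div(*n, base);	/* *n now holds the quotient */
}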

View file

@ -28,6 +28,8 @@ this code that are retained.
===============================================================================
*/
#include <asm/div64.h>
#include "fpa11.h"
//#include "milieu.h"
//#include "softfloat.h"
@ -1331,7 +1333,11 @@ float32 float32_div( float32 a, float32 b )
aSig >>= 1;
++zExp;
}
zSig = ( ( (bits64) aSig )<<32 ) / bSig;
{
bits64 tmp = ( (bits64) aSig )<<32;
do_div( tmp, bSig );
zSig = tmp;
}
if ( ( zSig & 0x3F ) == 0 ) {
zSig |= ( ( (bits64) bSig ) * zSig != ( (bits64) aSig )<<32 );
}
@ -1397,7 +1403,9 @@ float32 float32_rem( float32 a, float32 b )
q = ( bSig <= aSig );
if ( q ) aSig -= bSig;
if ( 0 < expDiff ) {
q = ( ( (bits64) aSig )<<32 ) / bSig;
bits64 tmp = ( (bits64) aSig )<<32;
do_div( tmp, bSig );
q = tmp;
q >>= 32 - expDiff;
bSig >>= 2;
aSig = ( ( aSig>>1 )<<( expDiff - 1 ) ) - bSig * q;

View file

@ -179,6 +179,8 @@ config CMDLINE
time by entering them here. As a minimum, you should specify the
memory size and the root device (e.g., mem=64M root=/dev/nfs).
source "mm/Kconfig"
endmenu
source "drivers/base/Kconfig"

View file

@ -23,8 +23,8 @@
# User may have a custom install script
if [ -x /sbin/installkernel ]; then
exec /sbin/installkernel "$@"
if [ -x /sbin/${CROSS_COMPILE}installkernel ]; then
exec /sbin/${CROSS_COMPILE}installkernel "$@"
fi
if [ "$2" = "zImage" ]; then

View file

@ -74,6 +74,8 @@ config PREEMPT
Say Y here if you are building a kernel for a desktop, embedded
or real-time system. Say N if you are unsure.
source mm/Kconfig
endmenu
menu "Hardware setup"

View file

@ -74,6 +74,8 @@ config HIGHPTE
with a lot of RAM, this can be wasteful of precious low memory.
Setting this option will put user-space page tables in high memory.
source "mm/Kconfig"
choice
prompt "uClinux kernel load address"
depends on !MMU

View file

@ -180,4 +180,7 @@ config CPU_H8S
config PREEMPT
bool "Preemptible Kernel"
default n
source "mm/Kconfig"
endmenu

View file

@ -245,12 +245,12 @@ static unsigned short *getnextpc(struct task_struct *child, unsigned short *pc)
addr = h8300_get_reg(child, regno-1+PT_ER1);
return (unsigned short *)addr;
case relb:
if ((inst = 0x55) || isbranch(child,inst & 0x0f))
if (inst == 0x55 || isbranch(child,inst & 0x0f))
pc = (unsigned short *)((unsigned long)pc +
((signed char)(*fetch_p)));
return pc+1; /* skip myself */
case relw:
if ((inst = 0x5c) || isbranch(child,(*fetch_p & 0xf0) >> 4))
if (inst == 0x5c || isbranch(child,(*fetch_p & 0xf0) >> 4))
pc = (unsigned short *)((unsigned long)pc +
((signed short)(*(pc+1))));
return pc+2; /* skip myself */

View file

@ -68,7 +68,6 @@ config X86_VOYAGER
config X86_NUMAQ
bool "NUMAQ (IBM/Sequent)"
select DISCONTIGMEM
select NUMA
help
This option is used for getting Linux to run on a (IBM/Sequent) NUMA
@ -783,26 +782,49 @@ comment "NUMA (NUMA-Q) requires SMP, 64GB highmem support"
comment "NUMA (Summit) requires SMP, 64GB highmem support, ACPI"
depends on X86_SUMMIT && (!HIGHMEM64G || !ACPI)
config DISCONTIGMEM
bool
depends on NUMA
default y
config HAVE_ARCH_BOOTMEM_NODE
bool
depends on NUMA
default y
config HAVE_MEMORY_PRESENT
config ARCH_HAVE_MEMORY_PRESENT
bool
depends on DISCONTIGMEM
default y
config NEED_NODE_MEMMAP_SIZE
bool
depends on DISCONTIGMEM
depends on DISCONTIGMEM || SPARSEMEM
default y
config HAVE_ARCH_ALLOC_REMAP
bool
depends on NUMA
default y
config ARCH_DISCONTIGMEM_ENABLE
def_bool y
depends on NUMA
config ARCH_DISCONTIGMEM_DEFAULT
def_bool y
depends on NUMA
config ARCH_SPARSEMEM_ENABLE
def_bool y
depends on NUMA
config ARCH_SELECT_MEMORY_MODEL
def_bool y
depends on ARCH_SPARSEMEM_ENABLE
source "mm/Kconfig"
config HAVE_ARCH_EARLY_PFN_TO_NID
bool
default y
depends on NUMA
config HIGHPTE
bool "Allocate 3rd-level pagetables from highmem"
depends on HIGHMEM4G || HIGHMEM64G
@ -939,6 +961,8 @@ config SECCOMP
If unsure, say Y. Only embedded should say N here.
source kernel/Kconfig.hz
endmenu

View file

@ -17,6 +17,13 @@
# 20050320 Kianusch Sayah Karadji <kianusch@sk-tech.net>
# Added support for GEODE CPU
HAS_BIARCH := $(call cc-option-yn, -m32)
ifeq ($(HAS_BIARCH),y)
AS := $(AS) --32
LD := $(LD) -m elf_i386
CC := $(CC) -m32
endif
LDFLAGS := -m elf_i386
OBJCOPYFLAGS := -O binary -R .note -R .comment -S
LDFLAGS_vmlinux :=

View file

@ -21,8 +21,8 @@
# User may have a custom install script
if [ -x ~/bin/installkernel ]; then exec ~/bin/installkernel "$@"; fi
if [ -x /sbin/installkernel ]; then exec /sbin/installkernel "$@"; fi
if [ -x ~/bin/${CROSS_COMPILE}installkernel ]; then exec ~/bin/${CROSS_COMPILE}installkernel "$@"; fi
if [ -x /sbin/${CROSS_COMPILE}installkernel ]; then exec /sbin/${CROSS_COMPILE}installkernel "$@"; fi
# Default install - same as make zlilo

View file

@ -1133,7 +1133,7 @@ inline void smp_local_timer_interrupt(struct pt_regs * regs)
}
#ifdef CONFIG_SMP
update_process_times(user_mode(regs));
update_process_times(user_mode_vm(regs));
#endif
}

View file

@ -635,7 +635,7 @@ void __init cpu_init (void)
/* Clear all 6 debug registers: */
#define CD(register) __asm__("movl %0,%%db" #register ::"r"(0) );
#define CD(register) set_debugreg(0, register)
CD(0); CD(1); CD(2); CD(3); /* no db4 and db5 */; CD(6); CD(7);

View file

@ -375,6 +375,19 @@ int mtrr_add_page(unsigned long base, unsigned long size,
return error;
}
static int mtrr_check(unsigned long base, unsigned long size)
{
if ((base & (PAGE_SIZE - 1)) || (size & (PAGE_SIZE - 1))) {
printk(KERN_WARNING
"mtrr: size and base must be multiples of 4 kiB\n");
printk(KERN_DEBUG
"mtrr: size: 0x%lx base: 0x%lx\n", size, base);
dump_stack();
return -1;
}
return 0;
}
/**
* mtrr_add - Add a memory type region
* @base: Physical base address of region
@ -415,11 +428,8 @@ int
mtrr_add(unsigned long base, unsigned long size, unsigned int type,
char increment)
{
if ((base & (PAGE_SIZE - 1)) || (size & (PAGE_SIZE - 1))) {
printk(KERN_WARNING "mtrr: size and base must be multiples of 4 kiB\n");
printk(KERN_DEBUG "mtrr: size: 0x%lx base: 0x%lx\n", size, base);
if (mtrr_check(base, size))
return -EINVAL;
}
return mtrr_add_page(base >> PAGE_SHIFT, size >> PAGE_SHIFT, type,
increment);
}
@ -511,11 +521,8 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
int
mtrr_del(int reg, unsigned long base, unsigned long size)
{
if ((base & (PAGE_SIZE - 1)) || (size & (PAGE_SIZE - 1))) {
printk(KERN_INFO "mtrr: size and base must be multiples of 4 kiB\n");
printk(KERN_DEBUG "mtrr: size: 0x%lx base: 0x%lx\n", size, base);
if (mtrr_check(base, size))
return -EINVAL;
}
return mtrr_del_page(reg, base >> PAGE_SHIFT, size >> PAGE_SHIFT);
}

View file

@ -86,7 +86,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
seq_printf(m, "stepping\t: unknown\n");
if ( cpu_has(c, X86_FEATURE_TSC) ) {
seq_printf(m, "cpu MHz\t\t: %lu.%03lu\n",
seq_printf(m, "cpu MHz\t\t: %u.%03u\n",
cpu_khz / 1000, (cpu_khz % 1000));
}

View file

@ -1,97 +1,17 @@
#include <linux/config.h>
#include <linux/module.h>
#include <linux/smp.h>
#include <linux/user.h>
#include <linux/elfcore.h>
#include <linux/mca.h>
#include <linux/sched.h>
#include <linux/in6.h>
#include <linux/interrupt.h>
#include <linux/smp_lock.h>
#include <linux/pm.h>
#include <linux/pci.h>
#include <linux/apm_bios.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/tty.h>
#include <linux/highmem.h>
#include <linux/time.h>
#include <asm/semaphore.h>
#include <asm/processor.h>
#include <asm/i387.h>
#include <asm/uaccess.h>
#include <asm/checksum.h>
#include <asm/io.h>
#include <asm/delay.h>
#include <asm/irq.h>
#include <asm/mmx.h>
#include <asm/desc.h>
#include <asm/pgtable.h>
#include <asm/tlbflush.h>
#include <asm/nmi.h>
#include <asm/ist.h>
#include <asm/kdebug.h>
extern void dump_thread(struct pt_regs *, struct user *);
extern spinlock_t rtc_lock;
/* This is definitely a GPL-only symbol */
EXPORT_SYMBOL_GPL(cpu_gdt_table);
#if defined(CONFIG_APM_MODULE)
extern void machine_real_restart(unsigned char *, int);
EXPORT_SYMBOL(machine_real_restart);
extern void default_idle(void);
EXPORT_SYMBOL(default_idle);
#endif
#ifdef CONFIG_SMP
extern void FASTCALL( __write_lock_failed(rwlock_t *rw));
extern void FASTCALL( __read_lock_failed(rwlock_t *rw));
#endif
#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_HD) || defined(CONFIG_BLK_DEV_IDE_MODULE) || defined(CONFIG_BLK_DEV_HD_MODULE)
extern struct drive_info_struct drive_info;
EXPORT_SYMBOL(drive_info);
#endif
extern unsigned long cpu_khz;
extern unsigned long get_cmos_time(void);
/* platform dependent support */
EXPORT_SYMBOL(boot_cpu_data);
#ifdef CONFIG_DISCONTIGMEM
EXPORT_SYMBOL(node_data);
EXPORT_SYMBOL(physnode_map);
#endif
#ifdef CONFIG_X86_NUMAQ
EXPORT_SYMBOL(xquad_portio);
#endif
EXPORT_SYMBOL(dump_thread);
EXPORT_SYMBOL(dump_fpu);
EXPORT_SYMBOL_GPL(kernel_fpu_begin);
EXPORT_SYMBOL(__ioremap);
EXPORT_SYMBOL(ioremap_nocache);
EXPORT_SYMBOL(iounmap);
EXPORT_SYMBOL(kernel_thread);
EXPORT_SYMBOL(pm_idle);
EXPORT_SYMBOL(pm_power_off);
EXPORT_SYMBOL(get_cmos_time);
EXPORT_SYMBOL(cpu_khz);
EXPORT_SYMBOL(apm_info);
EXPORT_SYMBOL(__down_failed);
EXPORT_SYMBOL(__down_failed_interruptible);
EXPORT_SYMBOL(__down_failed_trylock);
EXPORT_SYMBOL(__up_wakeup);
/* Networking helper routines. */
EXPORT_SYMBOL(csum_partial_copy_generic);
/* Delay loops */
EXPORT_SYMBOL(__ndelay);
EXPORT_SYMBOL(__udelay);
EXPORT_SYMBOL(__delay);
EXPORT_SYMBOL(__const_udelay);
EXPORT_SYMBOL(__get_user_1);
EXPORT_SYMBOL(__get_user_2);
@ -105,87 +25,11 @@ EXPORT_SYMBOL(__put_user_8);
EXPORT_SYMBOL(strpbrk);
EXPORT_SYMBOL(strstr);
EXPORT_SYMBOL(strncpy_from_user);
EXPORT_SYMBOL(__strncpy_from_user);
EXPORT_SYMBOL(clear_user);
EXPORT_SYMBOL(__clear_user);
EXPORT_SYMBOL(__copy_from_user_ll);
EXPORT_SYMBOL(__copy_to_user_ll);
EXPORT_SYMBOL(strnlen_user);
EXPORT_SYMBOL(dma_alloc_coherent);
EXPORT_SYMBOL(dma_free_coherent);
#ifdef CONFIG_PCI
EXPORT_SYMBOL(pci_mem_start);
#endif
#ifdef CONFIG_PCI_BIOS
EXPORT_SYMBOL(pcibios_set_irq_routing);
EXPORT_SYMBOL(pcibios_get_irq_routing_table);
#endif
#ifdef CONFIG_X86_USE_3DNOW
EXPORT_SYMBOL(_mmx_memcpy);
EXPORT_SYMBOL(mmx_clear_page);
EXPORT_SYMBOL(mmx_copy_page);
#endif
#ifdef CONFIG_X86_HT
EXPORT_SYMBOL(smp_num_siblings);
EXPORT_SYMBOL(cpu_sibling_map);
#endif
#ifdef CONFIG_SMP
EXPORT_SYMBOL(cpu_data);
EXPORT_SYMBOL(cpu_online_map);
EXPORT_SYMBOL(cpu_callout_map);
extern void FASTCALL( __write_lock_failed(rwlock_t *rw));
extern void FASTCALL( __read_lock_failed(rwlock_t *rw));
EXPORT_SYMBOL(__write_lock_failed);
EXPORT_SYMBOL(__read_lock_failed);
/* Global SMP stuff */
EXPORT_SYMBOL(smp_call_function);
/* TLB flushing */
EXPORT_SYMBOL(flush_tlb_page);
#endif
#ifdef CONFIG_X86_IO_APIC
EXPORT_SYMBOL(IO_APIC_get_PCI_irq_vector);
#endif
#ifdef CONFIG_MCA
EXPORT_SYMBOL(machine_id);
#endif
#ifdef CONFIG_VT
EXPORT_SYMBOL(screen_info);
#endif
EXPORT_SYMBOL(get_wchan);
EXPORT_SYMBOL(rtc_lock);
EXPORT_SYMBOL_GPL(set_nmi_callback);
EXPORT_SYMBOL_GPL(unset_nmi_callback);
EXPORT_SYMBOL(register_die_notifier);
#ifdef CONFIG_HAVE_DEC_LOCK
EXPORT_SYMBOL(_atomic_dec_and_lock);
#endif
EXPORT_SYMBOL(__PAGE_KERNEL);
#ifdef CONFIG_HIGHMEM
EXPORT_SYMBOL(kmap);
EXPORT_SYMBOL(kunmap);
EXPORT_SYMBOL(kmap_atomic);
EXPORT_SYMBOL(kunmap_atomic);
EXPORT_SYMBOL(kmap_atomic_to_page);
#endif
#if defined(CONFIG_X86_SPEEDSTEP_SMI) || defined(CONFIG_X86_SPEEDSTEP_SMI_MODULE)
EXPORT_SYMBOL(ist_info);
#endif
EXPORT_SYMBOL(csum_partial);

View file

@ -10,6 +10,7 @@
#include <linux/config.h>
#include <linux/sched.h>
#include <linux/module.h>
#include <asm/processor.h>
#include <asm/i387.h>
#include <asm/math_emu.h>
@ -79,6 +80,7 @@ void kernel_fpu_begin(void)
}
clts();
}
EXPORT_SYMBOL_GPL(kernel_fpu_begin);
void restore_fpu( struct task_struct *tsk )
{
@ -526,6 +528,7 @@ int dump_fpu( struct pt_regs *regs, struct user_i387_struct *fpu )
return fpvalid;
}
EXPORT_SYMBOL(dump_fpu);
int dump_task_fpu(struct task_struct *tsk, struct user_i387_struct *fpu)
{

View file

@ -31,7 +31,7 @@
#include <linux/mc146818rtc.h>
#include <linux/compiler.h>
#include <linux/acpi.h>
#include <linux/module.h>
#include <linux/sysdev.h>
#include <asm/io.h>
#include <asm/smp.h>
@ -812,6 +812,7 @@ int IO_APIC_get_PCI_irq_vector(int bus, int slot, int pin)
}
return best_guess;
}
EXPORT_SYMBOL(IO_APIC_get_PCI_irq_vector);
/*
* This function currently is only a helper for the i386 smp boot process where
@ -1658,6 +1659,12 @@ static void __init setup_ioapic_ids_from_mpc(void)
unsigned char old_id;
unsigned long flags;
/*
* Don't check I/O APIC IDs for xAPIC systems. They have
* no meaning without the serial APIC bus.
*/
if (!(boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && boot_cpu_data.x86 < 15))
return;
/*
* This is broken; anything with a real cpu count has to
* circumvent this idiocy regardless.
@ -1684,10 +1691,6 @@ static void __init setup_ioapic_ids_from_mpc(void)
mp_ioapics[apic].mpc_apicid = reg_00.bits.ID;
}
/* Don't check I/O APIC IDs for some xAPIC systems. They have
* no meaning without the serial APIC bus. */
if (NO_IOAPIC_CHECK)
continue;
/*
* Sanity check, is the ID really free? Every APIC in a
* system must have a unique ID or we get lots of nice

View file

@ -23,6 +23,9 @@
* Rusty Russell).
* 2004-July Suparna Bhattacharya <suparna@in.ibm.com> added jumper probes
* interface to access function arguments.
* 2005-May Hien Nguyen <hien@us.ibm.com>, Jim Keniston
* <jkenisto@us.ibm.com> and Prasanna S Panchamukhi
* <prasanna@in.ibm.com> added function-return probes.
*/
#include <linux/config.h>
@ -30,15 +33,14 @@
#include <linux/ptrace.h>
#include <linux/spinlock.h>
#include <linux/preempt.h>
#include <asm/cacheflush.h>
#include <asm/kdebug.h>
#include <asm/desc.h>
/* kprobe_status settings */
#define KPROBE_HIT_ACTIVE 0x00000001
#define KPROBE_HIT_SS 0x00000002
static struct kprobe *current_kprobe;
static unsigned long kprobe_status, kprobe_old_eflags, kprobe_saved_eflags;
static struct kprobe *kprobe_prev;
static unsigned long kprobe_status_prev, kprobe_old_eflags_prev, kprobe_saved_eflags_prev;
static struct pt_regs jprobe_saved_regs;
static long *jprobe_saved_esp;
/* copy of the kernel stack at the probe fire time */
@ -68,16 +70,50 @@ int arch_prepare_kprobe(struct kprobe *p)
void arch_copy_kprobe(struct kprobe *p)
{
memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
p->opcode = *p->addr;
}
void arch_arm_kprobe(struct kprobe *p)
{
*p->addr = BREAKPOINT_INSTRUCTION;
flush_icache_range((unsigned long) p->addr,
(unsigned long) p->addr + sizeof(kprobe_opcode_t));
}
void arch_disarm_kprobe(struct kprobe *p)
{
*p->addr = p->opcode;
flush_icache_range((unsigned long) p->addr,
(unsigned long) p->addr + sizeof(kprobe_opcode_t));
}
void arch_remove_kprobe(struct kprobe *p)
{
}
static inline void disarm_kprobe(struct kprobe *p, struct pt_regs *regs)
static inline void save_previous_kprobe(void)
{
*p->addr = p->opcode;
regs->eip = (unsigned long)p->addr;
kprobe_prev = current_kprobe;
kprobe_status_prev = kprobe_status;
kprobe_old_eflags_prev = kprobe_old_eflags;
kprobe_saved_eflags_prev = kprobe_saved_eflags;
}
static inline void restore_previous_kprobe(void)
{
current_kprobe = kprobe_prev;
kprobe_status = kprobe_status_prev;
kprobe_old_eflags = kprobe_old_eflags_prev;
kprobe_saved_eflags = kprobe_saved_eflags_prev;
}
static inline void set_current_kprobe(struct kprobe *p, struct pt_regs *regs)
{
current_kprobe = p;
kprobe_saved_eflags = kprobe_old_eflags
= (regs->eflags & (TF_MASK | IF_MASK));
if (is_IF_modifier(p->opcode))
kprobe_saved_eflags &= ~IF_MASK;
}
static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
@ -91,6 +127,50 @@ static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
regs->eip = (unsigned long)&p->ainsn.insn;
}
struct task_struct *arch_get_kprobe_task(void *ptr)
{
return ((struct thread_info *) (((unsigned long) ptr) &
(~(THREAD_SIZE -1))))->task;
}
void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
{
unsigned long *sara = (unsigned long *)&regs->esp;
struct kretprobe_instance *ri;
static void *orig_ret_addr;
/*
* Save the return address when the return probe hits
* the first time, and use it to populate the (kretprobe
* instance)->ret_addr for subsequent return probes at
* the same address since the stack address would have
* the kretprobe_trampoline by then.
*/
if (((void*) *sara) != kretprobe_trampoline)
orig_ret_addr = (void*) *sara;
if ((ri = get_free_rp_inst(rp)) != NULL) {
ri->rp = rp;
ri->stack_addr = sara;
ri->ret_addr = orig_ret_addr;
add_rp_inst(ri);
/* Replace the return addr with trampoline addr */
*sara = (unsigned long) &kretprobe_trampoline;
} else {
rp->nmissed++;
}
}
void arch_kprobe_flush_task(struct task_struct *tk)
{
struct kretprobe_instance *ri;
while ((ri = get_rp_inst_tsk(tk)) != NULL) {
*((unsigned long *)(ri->stack_addr)) =
(unsigned long) ri->ret_addr;
recycle_rp_inst(ri);
}
}
/*
* Interrupts are disabled on entry as trap3 is an interrupt gate and they
* remain disabled throughout this function.
@ -127,8 +207,18 @@ static int kprobe_handler(struct pt_regs *regs)
unlock_kprobes();
goto no_kprobe;
}
disarm_kprobe(p, regs);
ret = 1;
/* We have reentered the kprobe_handler(), since
* another probe was hit while within the handler.
* Here we save the original kprobe variables and
* just single step on the instruction of the new probe
* without calling any user handlers.
*/
save_previous_kprobe();
set_current_kprobe(p, regs);
p->nmissed++;
prepare_singlestep(p, regs);
kprobe_status = KPROBE_REENTER;
return 1;
} else {
p = current_kprobe;
if (p->break_handler && p->break_handler(p, regs)) {
@ -163,11 +253,7 @@ static int kprobe_handler(struct pt_regs *regs)
}
kprobe_status = KPROBE_HIT_ACTIVE;
current_kprobe = p;
kprobe_saved_eflags = kprobe_old_eflags
= (regs->eflags & (TF_MASK | IF_MASK));
if (is_IF_modifier(p->opcode))
kprobe_saved_eflags &= ~IF_MASK;
set_current_kprobe(p, regs);
if (p->pre_handler && p->pre_handler(p, regs))
/* handler has already set things up, so skip ss setup */
@ -183,6 +269,55 @@ no_kprobe:
return ret;
}
/*
* For function-return probes, init_kprobes() establishes a probepoint
* here. When a retprobed function returns, this probe is hit and
* trampoline_probe_handler() runs, calling the kretprobe's handler.
*/
void kretprobe_trampoline_holder(void)
{
asm volatile ( ".global kretprobe_trampoline\n"
"kretprobe_trampoline: \n"
"nop\n");
}
/*
* Called when we hit the probe point at kretprobe_trampoline
*/
int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
{
struct task_struct *tsk;
struct kretprobe_instance *ri;
struct hlist_head *head;
struct hlist_node *node;
unsigned long *sara = ((unsigned long *) &regs->esp) - 1;
tsk = arch_get_kprobe_task(sara);
head = kretprobe_inst_table_head(tsk);
hlist_for_each_entry(ri, node, head, hlist) {
if (ri->stack_addr == sara && ri->rp) {
if (ri->rp->handler)
ri->rp->handler(ri, regs);
}
}
return 0;
}
void trampoline_post_handler(struct kprobe *p, struct pt_regs *regs,
unsigned long flags)
{
struct kretprobe_instance *ri;
/* RA already popped */
unsigned long *sara = ((unsigned long *)&regs->esp) - 1;
while ((ri = get_rp_inst(sara))) {
regs->eip = (unsigned long)ri->ret_addr;
recycle_rp_inst(ri);
}
regs->eflags &= ~TF_MASK;
}
/*
* Called after single-stepping. p->addr is the address of the
* instruction whose first byte has been replaced by the "int 3"
@ -263,13 +398,22 @@ static inline int post_kprobe_handler(struct pt_regs *regs)
if (!kprobe_running())
return 0;
if (current_kprobe->post_handler)
if ((kprobe_status != KPROBE_REENTER) && current_kprobe->post_handler) {
kprobe_status = KPROBE_HIT_SSDONE;
current_kprobe->post_handler(current_kprobe, regs, 0);
}
resume_execution(current_kprobe, regs);
if (current_kprobe->post_handler != trampoline_post_handler)
resume_execution(current_kprobe, regs);
regs->eflags |= kprobe_saved_eflags;
/*Restore back the original saved kprobes variables and continue. */
if (kprobe_status == KPROBE_REENTER) {
restore_previous_kprobe();
goto out;
}
unlock_kprobes();
out:
preempt_enable_no_resched();
/*
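
The kprobes hunks above add the i386 arch support for function-return
probes. A hypothetical consumer module, sketched against the 2.6.13-era
kretprobe API (field names and registration details may differ in other
versions):

#include <linux/module.h>
#include <linux/kprobes.h>

/* A local function to probe, purely for demonstration. */
static noinline int example_target(int x)
{
	return x * 2;
}

static int example_ret_handler(struct kretprobe_instance *ri,
			       struct pt_regs *regs)
{
	/* On i386 the probed function's return value sits in eax. */
	printk(KERN_INFO "example_target returned %lx\n", regs->eax);
	return 0;
}

static struct kretprobe example_rp = {
	.handler	= example_ret_handler,
	.maxactive	= 20,	/* instances preallocated for concurrency */
};

static int __init example_init(void)
{
	int ret;

	example_rp.kp.addr = (kprobe_opcode_t *)example_target;
	ret = register_kretprobe(&example_rp);
	if (!ret)
		example_target(21);	/* trigger the return probe once */
	return ret;
}

static void __exit example_exit(void)
{
	unregister_kretprobe(&example_rp);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");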

View file

@ -914,7 +914,10 @@ void __init mp_register_ioapic (
mp_ioapics[idx].mpc_apicaddr = address;
set_fixmap_nocache(FIX_IO_APIC_BASE_0 + idx, address);
mp_ioapics[idx].mpc_apicid = io_apic_get_unique_id(idx, id);
if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) && (boot_cpu_data.x86 < 15))
mp_ioapics[idx].mpc_apicid = io_apic_get_unique_id(idx, id);
else
mp_ioapics[idx].mpc_apicid = id;
mp_ioapics[idx].mpc_apicver = io_apic_get_version(idx);
/*
@ -1055,11 +1058,20 @@ void __init mp_config_acpi_legacy_irqs (void)
}
}
#define MAX_GSI_NUM 4096
int mp_register_gsi (u32 gsi, int edge_level, int active_high_low)
{
int ioapic = -1;
int ioapic_pin = 0;
int idx, bit = 0;
static int pci_irq = 16;
/*
* Mapping between Global System Interrupts, which
* represent all possible interrupts, and IRQs
* assigned to actual devices.
*/
static int gsi_to_irq[MAX_GSI_NUM];
#ifdef CONFIG_ACPI_BUS
/* Don't set up the ACPI SCI because it's already set up */
@ -1094,11 +1106,26 @@ int mp_register_gsi (u32 gsi, int edge_level, int active_high_low)
if ((1<<bit) & mp_ioapic_routing[ioapic].pin_programmed[idx]) {
Dprintk(KERN_DEBUG "Pin %d-%d already programmed\n",
mp_ioapic_routing[ioapic].apic_id, ioapic_pin);
return gsi;
return gsi_to_irq[gsi];
}
mp_ioapic_routing[ioapic].pin_programmed[idx] |= (1<<bit);
if (edge_level) {
/*
* For PCI devices assign IRQs in order, avoiding gaps
* due to unused I/O APIC pins.
*/
int irq = gsi;
if (gsi < MAX_GSI_NUM) {
gsi = pci_irq++;
gsi_to_irq[irq] = gsi;
} else {
printk(KERN_ERR "GSI %u is too high\n", gsi);
return gsi;
}
}
io_apic_set_pci_routing(ioapic, ioapic_pin, gsi,
edge_level == ACPI_EDGE_SENSITIVE ? 0 : 1,
active_high_low == ACPI_ACTIVE_HIGH ? 0 : 1);

View file

@ -28,8 +28,7 @@
#include <linux/sysctl.h>
#include <asm/smp.h>
#include <asm/mtrr.h>
#include <asm/mpspec.h>
#include <asm/div64.h>
#include <asm/nmi.h>
#include "mach_traps.h"
@ -324,6 +323,16 @@ static void clear_msr_range(unsigned int base, unsigned int n)
wrmsr(base+i, 0, 0);
}
static inline void write_watchdog_counter(const char *descr)
{
u64 count = (u64)cpu_khz * 1000;
do_div(count, nmi_hz);
if(descr)
Dprintk("setting %s to -0x%08Lx\n", descr, count);
wrmsrl(nmi_perfctr_msr, 0 - count);
}
static void setup_k7_watchdog(void)
{
unsigned int evntsel;
@ -339,8 +348,7 @@ static void setup_k7_watchdog(void)
| K7_NMI_EVENT;
wrmsr(MSR_K7_EVNTSEL0, evntsel, 0);
Dprintk("setting K7_PERFCTR0 to %08lx\n", -(cpu_khz/nmi_hz*1000));
wrmsr(MSR_K7_PERFCTR0, -(cpu_khz/nmi_hz*1000), -1);
write_watchdog_counter("K7_PERFCTR0");
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= K7_EVNTSEL_ENABLE;
wrmsr(MSR_K7_EVNTSEL0, evntsel, 0);
@ -361,8 +369,7 @@ static void setup_p6_watchdog(void)
| P6_NMI_EVENT;
wrmsr(MSR_P6_EVNTSEL0, evntsel, 0);
Dprintk("setting P6_PERFCTR0 to %08lx\n", -(cpu_khz/nmi_hz*1000));
wrmsr(MSR_P6_PERFCTR0, -(cpu_khz/nmi_hz*1000), 0);
write_watchdog_counter("P6_PERFCTR0");
apic_write(APIC_LVTPC, APIC_DM_NMI);
evntsel |= P6_EVNTSEL0_ENABLE;
wrmsr(MSR_P6_EVNTSEL0, evntsel, 0);
@ -402,8 +409,7 @@ static int setup_p4_watchdog(void)
wrmsr(MSR_P4_CRU_ESCR0, P4_NMI_CRU_ESCR0, 0);
wrmsr(MSR_P4_IQ_CCCR0, P4_NMI_IQ_CCCR0 & ~P4_CCCR_ENABLE, 0);
Dprintk("setting P4_IQ_COUNTER0 to 0x%08lx\n", -(cpu_khz/nmi_hz*1000));
wrmsr(MSR_P4_IQ_COUNTER0, -(cpu_khz/nmi_hz*1000), -1);
write_watchdog_counter("P4_IQ_COUNTER0");
apic_write(APIC_LVTPC, APIC_DM_NMI);
wrmsr(MSR_P4_IQ_CCCR0, nmi_p4_cccr_val, 0);
return 1;
@ -518,7 +524,7 @@ void nmi_watchdog_tick (struct pt_regs * regs)
* other P6 variant */
apic_write(APIC_LVTPC, APIC_DM_NMI);
}
wrmsr(nmi_perfctr_msr, -(cpu_khz/nmi_hz*1000), -1);
write_watchdog_counter(NULL);
}
}

View file

@ -11,6 +11,7 @@
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/pci.h>
#include <linux/module.h>
#include <asm/io.h>
struct dma_coherent_mem {
@ -54,6 +55,7 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
}
return ret;
}
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle)
@ -68,6 +70,7 @@ void dma_free_coherent(struct device *dev, size_t size,
} else
free_pages((unsigned long)vaddr, order);
}
EXPORT_SYMBOL(dma_free_coherent);
int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
dma_addr_t device_addr, size_t size, int flags)

View file

@ -37,6 +37,7 @@
#include <linux/kallsyms.h>
#include <linux/ptrace.h>
#include <linux/random.h>
#include <linux/kprobes.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
@ -73,6 +74,7 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
* Powermanagement idle function, if any..
*/
void (*pm_idle)(void);
EXPORT_SYMBOL(pm_idle);
static DEFINE_PER_CPU(unsigned int, cpu_idle_state);
void disable_hlt(void)
@ -105,6 +107,9 @@ void default_idle(void)
cpu_relax();
}
}
#ifdef CONFIG_APM_MODULE
EXPORT_SYMBOL(default_idle);
#endif
/*
* On SMP it's slightly faster (but much more power-consuming!)
@ -262,7 +267,7 @@ void show_regs(struct pt_regs * regs)
printk("EIP: %04x:[<%08lx>] CPU: %d\n",0xffff & regs->xcs,regs->eip, smp_processor_id());
print_symbol("EIP is at %s\n", regs->eip);
if (regs->xcs & 3)
if (user_mode(regs))
printk(" ESP: %04x:%08lx",0xffff & regs->xss,regs->esp);
printk(" EFLAGS: %08lx %s (%s)\n",
regs->eflags, print_tainted(), system_utsname.release);
@ -325,6 +330,7 @@ int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
/* Ok, create the new process.. */
return do_fork(flags | CLONE_VM | CLONE_UNTRACED, 0, &regs, 0, NULL, NULL);
}
EXPORT_SYMBOL(kernel_thread);
/*
* Free current thread data structures etc..
@ -334,6 +340,13 @@ void exit_thread(void)
struct task_struct *tsk = current;
struct thread_struct *t = &tsk->thread;
/*
* Remove function-return probe instances associated with this task
* and put them back on the free list. Do not insert an exit probe for
* this function, it will be disabled by kprobe_flush_task if you do.
*/
kprobe_flush_task(tsk);
/* The process may have allocated an io port bitmap... nuke it. */
if (unlikely(NULL != t->io_bitmap_ptr)) {
int cpu = get_cpu();
@ -357,6 +370,13 @@ void flush_thread(void)
{
struct task_struct *tsk = current;
/*
* Remove function-return probe instances associated with this task
* and put them back on the free list. Do not insert an exit probe for
* this function, it will be disabled by kprobe_flush_task if you do.
*/
kprobe_flush_task(tsk);
memset(tsk->thread.debugreg, 0, sizeof(unsigned long)*8);
memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
/*
@ -508,6 +528,7 @@ void dump_thread(struct pt_regs * regs, struct user * dump)
dump->u_fpvalid = dump_fpu (regs, &dump->i387);
}
EXPORT_SYMBOL(dump_thread);
/*
* Capture the user space registers if the task is not running (in user space)
@ -627,13 +648,13 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
* Now maybe reload the debug registers
*/
if (unlikely(next->debugreg[7])) {
loaddebug(next, 0);
loaddebug(next, 1);
loaddebug(next, 2);
loaddebug(next, 3);
set_debugreg(current->thread.debugreg[0], 0);
set_debugreg(current->thread.debugreg[1], 1);
set_debugreg(current->thread.debugreg[2], 2);
set_debugreg(current->thread.debugreg[3], 3);
/* no 4 and 5 */
loaddebug(next, 6);
loaddebug(next, 7);
set_debugreg(current->thread.debugreg[6], 6);
set_debugreg(current->thread.debugreg[7], 7);
}
if (unlikely(prev->io_bitmap_ptr || next->io_bitmap_ptr))
@ -731,6 +752,7 @@ unsigned long get_wchan(struct task_struct *p)
} while (count++ < 16);
return 0;
}
EXPORT_SYMBOL(get_wchan);
/*
* sys_alloc_thread_area: get a yet unused TLS descriptor index.

View file

@ -668,7 +668,7 @@ void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, int error_code)
info.si_code = TRAP_BRKPT;
/* User-mode eip? */
info.si_addr = user_mode(regs) ? (void __user *) regs->eip : NULL;
info.si_addr = user_mode_vm(regs) ? (void __user *) regs->eip : NULL;
/* Send us the fakey SIGTRAP */
force_sig_info(SIGTRAP, &info, tsk);

View file

@ -2,6 +2,7 @@
* linux/arch/i386/kernel/reboot.c
*/
#include <linux/config.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/delay.h>
@ -19,6 +20,7 @@
* Power off function, if any
*/
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);
static int reboot_mode;
static int reboot_thru_bios;
@ -295,6 +297,9 @@ void machine_real_restart(unsigned char *code, int length)
:
: "i" ((void *) (0x1000 - sizeof (real_mode_switch) - 100)));
}
#ifdef CONFIG_APM_MODULE
EXPORT_SYMBOL(machine_real_restart);
#endif
void machine_restart(char * __unused)
{

View file

@ -23,8 +23,10 @@
* This file handles the architecture-dependent parts of initialization
*/
#include <linux/config.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/tty.h>
#include <linux/ioport.h>
#include <linux/acpi.h>
@ -73,6 +75,7 @@ EXPORT_SYMBOL(efi_enabled);
struct cpuinfo_x86 new_cpu_data __initdata = { 0, 0, 0, 0, -1, 1, 0, 0, -1 };
/* common cpu data for all cpus */
struct cpuinfo_x86 boot_cpu_data = { 0, 0, 0, 0, -1, 1, 0, 0, -1 };
EXPORT_SYMBOL(boot_cpu_data);
unsigned long mmu_cr4_features;
@ -90,12 +93,18 @@ extern acpi_interrupt_flags acpi_sci_flags;
/* for MCA, but anyone else can use it if they want */
unsigned int machine_id;
#ifdef CONFIG_MCA
EXPORT_SYMBOL(machine_id);
#endif
unsigned int machine_submodel_id;
unsigned int BIOS_revision;
unsigned int mca_pentium_flag;
/* For PCI or other memory-mapped resources */
unsigned long pci_mem_start = 0x10000000;
#ifdef CONFIG_PCI
EXPORT_SYMBOL(pci_mem_start);
#endif
/* Boot loader ID as an integer, for the benefit of proc_dointvec */
int bootloader_type;
@ -107,14 +116,26 @@ static unsigned int highmem_pages = -1;
* Setup options
*/
struct drive_info_struct { char dummy[32]; } drive_info;
#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_HD) || \
defined(CONFIG_BLK_DEV_IDE_MODULE) || defined(CONFIG_BLK_DEV_HD_MODULE)
EXPORT_SYMBOL(drive_info);
#endif
struct screen_info screen_info;
#ifdef CONFIG_VT
EXPORT_SYMBOL(screen_info);
#endif
struct apm_info apm_info;
EXPORT_SYMBOL(apm_info);
struct sys_desc_table_struct {
unsigned short length;
unsigned char table[0];
};
struct edid_info edid_info;
struct ist_info ist_info;
#if defined(CONFIG_X86_SPEEDSTEP_SMI) || \
defined(CONFIG_X86_SPEEDSTEP_SMI_MODULE)
EXPORT_SYMBOL(ist_info);
#endif
struct e820map e820;
extern void early_cpu_init(void);
@ -1022,7 +1043,7 @@ static void __init reserve_ebda_region(void)
reserve_bootmem(addr, PAGE_SIZE);
}
#ifndef CONFIG_DISCONTIGMEM
#ifndef CONFIG_NEED_MULTIPLE_NODES
void __init setup_bootmem_allocator(void);
static unsigned long __init setup_memory(void)
{
@ -1072,9 +1093,9 @@ void __init zone_sizes_init(void)
free_area_init(zones_size);
}
#else
extern unsigned long setup_memory(void);
extern unsigned long __init setup_memory(void);
extern void zone_sizes_init(void);
#endif /* !CONFIG_DISCONTIGMEM */
#endif /* !CONFIG_NEED_MULTIPLE_NODES */
void __init setup_bootmem_allocator(void)
{
@ -1475,6 +1496,7 @@ void __init setup_arch(char **cmdline_p)
#endif
paging_init();
remapped_pgdat_init();
sparse_init();
zone_sizes_init();
/*

View file

@ -346,8 +346,8 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
extern void __user __kernel_sigreturn;
extern void __user __kernel_rt_sigreturn;
static void setup_frame(int sig, struct k_sigaction *ka,
sigset_t *set, struct pt_regs * regs)
static int setup_frame(int sig, struct k_sigaction *ka,
sigset_t *set, struct pt_regs * regs)
{
void __user *restorer;
struct sigframe __user *frame;
@ -429,13 +429,14 @@ static void setup_frame(int sig, struct k_sigaction *ka,
current->comm, current->pid, frame, regs->eip, frame->pretcode);
#endif
return;
return 1;
give_sigsegv:
force_sigsegv(sig, current);
return 0;
}
static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set, struct pt_regs * regs)
{
void __user *restorer;
@ -522,20 +523,23 @@ static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
current->comm, current->pid, frame, regs->eip, frame->pretcode);
#endif
return;
return 1;
give_sigsegv:
force_sigsegv(sig, current);
return 0;
}
/*
* OK, we're invoking a handler
*/
static void
static int
handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
sigset_t *oldset, struct pt_regs * regs)
{
int ret;
/* Are we from a system call? */
if (regs->orig_eax >= 0) {
/* If so, check system call restarting.. */
@ -569,17 +573,19 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
/* Set up the stack frame */
if (ka->sa.sa_flags & SA_SIGINFO)
setup_rt_frame(sig, ka, info, oldset, regs);
ret = setup_rt_frame(sig, ka, info, oldset, regs);
else
setup_frame(sig, ka, oldset, regs);
ret = setup_frame(sig, ka, oldset, regs);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
if (ret && !(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
return ret;
}
/*
@ -599,7 +605,7 @@ int fastcall do_signal(struct pt_regs *regs, sigset_t *oldset)
* kernel mode. Just return without doing anything
* if so.
*/
if ((regs->xcs & 3) != 3)
if (!user_mode(regs))
return 1;
if (current->flags & PF_FREEZE) {
@ -618,12 +624,11 @@ int fastcall do_signal(struct pt_regs *regs, sigset_t *oldset)
* inside the kernel.
*/
if (unlikely(current->thread.debugreg[7])) {
loaddebug(&current->thread, 7);
set_debugreg(current->thread.debugreg[7], 7);
}
/* Whee! Actually deliver the signal. */
handle_signal(signr, &info, &ka, oldset, regs);
return 1;
return handle_signal(signr, &info, &ka, oldset, regs);
}
no_signal:

View file

@ -19,6 +19,7 @@
#include <linux/mc146818rtc.h>
#include <linux/cache.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <asm/mtrr.h>
#include <asm/tlbflush.h>
@ -452,6 +453,7 @@ void flush_tlb_page(struct vm_area_struct * vma, unsigned long va)
preempt_enable();
}
EXPORT_SYMBOL(flush_tlb_page);
static void do_flush_tlb_all(void* info)
{
@ -547,6 +549,7 @@ int smp_call_function (void (*func) (void *info), void *info, int nonatomic,
return 0;
}
EXPORT_SYMBOL(smp_call_function);
static void stop_this_cpu (void * dummy)
{

View file

@ -60,6 +60,9 @@ static int __initdata smp_b_stepping;
/* Number of siblings per CPU package */
int smp_num_siblings = 1;
#ifdef CONFIG_X86_HT
EXPORT_SYMBOL(smp_num_siblings);
#endif
int phys_proc_id[NR_CPUS]; /* Package ID of each logical CPU */
EXPORT_SYMBOL(phys_proc_id);
int cpu_core_id[NR_CPUS]; /* Core ID of each logical CPU */
@ -67,13 +70,16 @@ EXPORT_SYMBOL(cpu_core_id);
/* bitmap of online cpus */
cpumask_t cpu_online_map;
EXPORT_SYMBOL(cpu_online_map);
cpumask_t cpu_callin_map;
cpumask_t cpu_callout_map;
EXPORT_SYMBOL(cpu_callout_map);
static cpumask_t smp_commenced_mask;
/* Per CPU bogomips and other parameters */
struct cpuinfo_x86 cpu_data[NR_CPUS] __cacheline_aligned;
EXPORT_SYMBOL(cpu_data);
u8 x86_cpu_to_apicid[NR_CPUS] =
{ [0 ... NR_CPUS-1] = 0xff };
@ -199,7 +205,7 @@ static void __init synchronize_tsc_bp (void)
unsigned long long t0;
unsigned long long sum, avg;
long long delta;
unsigned long one_usec;
unsigned int one_usec;
int buggy = 0;
printk(KERN_INFO "checking TSC synchronization across %u CPUs: ", num_booting_cpus());
@ -885,8 +891,14 @@ static void smp_tune_scheduling (void)
static int boot_cpu_logical_apicid;
/* Where the IO area was mapped on multiquad, always 0 otherwise */
void *xquad_portio;
#ifdef CONFIG_X86_NUMAQ
EXPORT_SYMBOL(xquad_portio);
#endif
cpumask_t cpu_sibling_map[NR_CPUS] __cacheline_aligned;
#ifdef CONFIG_X86_HT
EXPORT_SYMBOL(cpu_sibling_map);
#endif
cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
EXPORT_SYMBOL(cpu_core_map);

View file

@ -77,11 +77,13 @@ u64 jiffies_64 = INITIAL_JIFFIES;
EXPORT_SYMBOL(jiffies_64);
unsigned long cpu_khz; /* Detected as we calibrate the TSC */
unsigned int cpu_khz; /* Detected as we calibrate the TSC */
EXPORT_SYMBOL(cpu_khz);
extern unsigned long wall_jiffies;
DEFINE_SPINLOCK(rtc_lock);
EXPORT_SYMBOL(rtc_lock);
DEFINE_SPINLOCK(i8253_lock);
EXPORT_SYMBOL(i8253_lock);
@ -324,6 +326,8 @@ unsigned long get_cmos_time(void)
return retval;
}
EXPORT_SYMBOL(get_cmos_time);
static void sync_cmos_clock(unsigned long dummy);
static struct timer_list sync_cmos_timer =

View file

@ -139,6 +139,15 @@ bad_calibration:
}
#endif
unsigned long read_timer_tsc(void)
{
unsigned long retval;
rdtscl(retval);
return retval;
}
/* calculate cpu_khz */
void init_cpu_khz(void)
{
@ -154,7 +163,8 @@ void init_cpu_khz(void)
:"=a" (cpu_khz), "=d" (edx)
:"r" (tsc_quotient),
"0" (eax), "1" (edx));
printk("Detected %lu.%03lu MHz processor.\n", cpu_khz / 1000, cpu_khz % 1000);
printk("Detected %u.%03u MHz processor.\n",
cpu_khz / 1000, cpu_khz % 1000);
}
}
}

View file

@ -64,3 +64,12 @@ struct timer_opts* __init select_timer(void)
panic("select_timer: Cannot find a suitable timer\n");
return NULL;
}
int read_current_timer(unsigned long *timer_val)
{
if (cur_timer->read_timer) {
*timer_val = cur_timer->read_timer();
return 0;
}
return -1;
}
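read_current_timer() exposes the new read_timer hook to generic code as a raw tick source; the timer_hpet, timer_pmtmr and timer_tsc structures below all wire it to read_timer_tsc. A hedged sketch of a caller (hypothetical function, shown only to illustrate the 0/-1 contract):
/* Count raw timer ticks across one jiffy; returns 0 when the
 * active timer has no read_timer method. */
static unsigned long ticks_per_jiffy_sketch(void)
{
	unsigned long t0, t1, start = jiffies;

	if (read_current_timer(&t0))
		return 0;
	while (jiffies == start)
		cpu_relax();
	if (read_current_timer(&t1))
		return 0;
	return t1 - t0;
}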

View file

@ -158,7 +158,7 @@ static int __init init_hpet(char* override)
{ unsigned long eax=0, edx=1000;
ASM_DIV64_REG(cpu_khz, edx, tsc_quotient,
eax, edx);
printk("Detected %lu.%03lu MHz processor.\n",
printk("Detected %u.%03u MHz processor.\n",
cpu_khz / 1000, cpu_khz % 1000);
}
set_cyc2ns_scale(cpu_khz/1000);
@ -186,6 +186,7 @@ static struct timer_opts timer_hpet = {
.get_offset = get_offset_hpet,
.monotonic_clock = monotonic_clock_hpet,
.delay = delay_hpet,
.read_timer = read_timer_tsc,
};
struct init_timer_opts __initdata timer_hpet_init = {

View file

@ -246,6 +246,7 @@ static struct timer_opts timer_pmtmr = {
.get_offset = get_offset_pmtmr,
.monotonic_clock = monotonic_clock_pmtmr,
.delay = delay_pmtmr,
.read_timer = read_timer_tsc,
};
struct init_timer_opts __initdata timer_pmtmr_init = {

View file

@ -256,7 +256,7 @@ static unsigned long loops_per_jiffy_ref = 0;
#ifndef CONFIG_SMP
static unsigned long fast_gettimeoffset_ref = 0;
static unsigned long cpu_khz_ref = 0;
static unsigned int cpu_khz_ref = 0;
#endif
static int
@ -323,7 +323,7 @@ static inline void cpufreq_delayed_get(void) { return; }
int recalibrate_cpu_khz(void)
{
#ifndef CONFIG_SMP
unsigned long cpu_khz_old = cpu_khz;
unsigned int cpu_khz_old = cpu_khz;
if (cpu_has_tsc) {
init_cpu_khz();
@ -534,7 +534,8 @@ static int __init init_tsc(char* override)
:"=a" (cpu_khz), "=d" (edx)
:"r" (tsc_quotient),
"0" (eax), "1" (edx));
printk("Detected %lu.%03lu MHz processor.\n", cpu_khz / 1000, cpu_khz % 1000);
printk("Detected %u.%03u MHz processor.\n",
cpu_khz / 1000, cpu_khz % 1000);
}
set_cyc2ns_scale(cpu_khz/1000);
return 0;
@ -572,6 +573,7 @@ static struct timer_opts timer_tsc = {
.get_offset = get_offset_tsc,
.monotonic_clock = monotonic_clock_tsc,
.delay = delay_tsc,
.read_timer = read_timer_tsc,
};
struct init_timer_opts __initdata timer_tsc_init = {

View file

@ -104,6 +104,7 @@ int register_die_notifier(struct notifier_block *nb)
spin_unlock_irqrestore(&die_notifier_lock, flags);
return err;
}
EXPORT_SYMBOL(register_die_notifier);
static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
{
@ -209,7 +210,7 @@ void show_registers(struct pt_regs *regs)
esp = (unsigned long) (&regs->esp);
ss = __KERNEL_DS;
if (regs->xcs & 3) {
if (user_mode(regs)) {
in_kernel = 0;
esp = regs->esp;
ss = regs->xss & 0xffff;
@ -265,7 +266,7 @@ static void handle_BUG(struct pt_regs *regs)
char c;
unsigned long eip;
if (regs->xcs & 3)
if (user_mode(regs))
goto no_bug; /* Not in kernel */
eip = regs->eip;
@ -353,7 +354,7 @@ void die(const char * str, struct pt_regs * regs, long err)
static inline void die_if_kernel(const char * str, struct pt_regs * regs, long err)
{
if (!(regs->eflags & VM_MASK) && !(3 & regs->xcs))
if (!user_mode_vm(regs))
die(str, regs, err);
}
@ -366,7 +367,7 @@ static void do_trap(int trapnr, int signr, char *str, int vm86,
goto trap_signal;
}
if (!(regs->xcs & 3))
if (!user_mode(regs))
goto kernel_trap;
trap_signal: {
@ -488,7 +489,7 @@ fastcall void do_general_protection(struct pt_regs * regs, long error_code)
if (regs->eflags & VM_MASK)
goto gp_in_vm86;
if (!(regs->xcs & 3))
if (!user_mode(regs))
goto gp_in_kernel;
current->thread.error_code = error_code;
@ -636,11 +637,13 @@ void set_nmi_callback(nmi_callback_t callback)
{
nmi_callback = callback;
}
EXPORT_SYMBOL_GPL(set_nmi_callback);
void unset_nmi_callback(void)
{
nmi_callback = dummy_nmi_callback;
}
EXPORT_SYMBOL_GPL(unset_nmi_callback);
#ifdef CONFIG_KPROBES
fastcall void do_int3(struct pt_regs *regs, long error_code)
@ -682,7 +685,7 @@ fastcall void do_debug(struct pt_regs * regs, long error_code)
unsigned int condition;
struct task_struct *tsk = current;
__asm__ __volatile__("movl %%db6,%0" : "=r" (condition));
get_debugreg(condition, 6);
if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code,
SIGTRAP) == NOTIFY_STOP)
@ -713,7 +716,7 @@ fastcall void do_debug(struct pt_regs * regs, long error_code)
* check for kernel mode by just checking the CPL
* of CS.
*/
if ((regs->xcs & 3) == 0)
if (!user_mode(regs))
goto clear_TF_reenable;
}
@ -724,9 +727,7 @@ fastcall void do_debug(struct pt_regs * regs, long error_code)
* the signal is delivered.
*/
clear_dr7:
__asm__("movl %0,%%db7"
: /* no output */
: "r" (0));
set_debugreg(0, 7);
return;
debug_vm86:

View file

@ -8,6 +8,7 @@
*/
#include <linux/spinlock.h>
#include <linux/module.h>
#include <asm/atomic.h>
int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
@ -38,3 +39,4 @@ slow_path:
spin_unlock(lock);
return 0;
}
EXPORT_SYMBOL(_atomic_dec_and_lock);

View file

@ -13,6 +13,7 @@
#include <linux/config.h>
#include <linux/sched.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <asm/processor.h>
#include <asm/delay.h>
#include <asm/timer.h>
@ -47,3 +48,8 @@ void __ndelay(unsigned long nsecs)
{
__const_udelay(nsecs * 0x00005); /* 2**32 / 1000000000 (rounded up) */
}
EXPORT_SYMBOL(__delay);
EXPORT_SYMBOL(__const_udelay);
EXPORT_SYMBOL(__udelay);
EXPORT_SYMBOL(__ndelay);

View file

@ -3,6 +3,7 @@
#include <linux/string.h>
#include <linux/sched.h>
#include <linux/hardirq.h>
#include <linux/module.h>
#include <asm/i387.h>
@ -397,3 +398,7 @@ void mmx_copy_page(void *to, void *from)
else
fast_copy_page(to, from);
}
EXPORT_SYMBOL(_mmx_memcpy);
EXPORT_SYMBOL(mmx_clear_page);
EXPORT_SYMBOL(mmx_copy_page);

View file

@ -84,6 +84,7 @@ __strncpy_from_user(char *dst, const char __user *src, long count)
__do_strncpy_from_user(dst, src, count, res);
return res;
}
EXPORT_SYMBOL(__strncpy_from_user);
/**
* strncpy_from_user: - Copy a NUL terminated string from userspace.
@ -111,7 +112,7 @@ strncpy_from_user(char *dst, const char __user *src, long count)
__do_strncpy_from_user(dst, src, count, res);
return res;
}
EXPORT_SYMBOL(strncpy_from_user);
/*
* Zero Userspace
@ -157,6 +158,7 @@ clear_user(void __user *to, unsigned long n)
__do_clear_user(to, n);
return n;
}
EXPORT_SYMBOL(clear_user);
/**
* __clear_user: - Zero a block of memory in user space, with less checking.
@ -175,6 +177,7 @@ __clear_user(void __user *to, unsigned long n)
__do_clear_user(to, n);
return n;
}
EXPORT_SYMBOL(__clear_user);
/**
* strlen_user: - Get the size of a string in user space.
@ -218,6 +221,7 @@ long strnlen_user(const char __user *s, long n)
:"cc");
return res & mask;
}
EXPORT_SYMBOL(strnlen_user);
#ifdef CONFIG_X86_INTEL_USERCOPY
static unsigned long
@ -570,6 +574,7 @@ survive:
n = __copy_user_intel(to, from, n);
return n;
}
EXPORT_SYMBOL(__copy_to_user_ll);
unsigned long
__copy_from_user_ll(void *to, const void __user *from, unsigned long n)
@ -581,6 +586,7 @@ __copy_from_user_ll(void *to, const void __user *from, unsigned long n)
n = __copy_user_zeroing_intel(to, from, n);
return n;
}
EXPORT_SYMBOL(__copy_from_user_ll);
/**
* copy_to_user: - Copy a block of data into user space.

View file

@ -1288,7 +1288,7 @@ smp_local_timer_interrupt(struct pt_regs * regs)
per_cpu(prof_counter, cpu);
}
update_process_times(user_mode(regs));
update_process_times(user_mode_vm(regs));
}
if( ((1<<cpu) & voyager_extended_vic_processors) == 0)

View file

@ -4,7 +4,7 @@
obj-y := init.o pgtable.o fault.o ioremap.o extable.o pageattr.o mmap.o
obj-$(CONFIG_DISCONTIGMEM) += discontig.o
obj-$(CONFIG_NUMA) += discontig.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_HIGHMEM) += highmem.o
obj-$(CONFIG_BOOT_IOREMAP) += boot_ioremap.o

View file

@ -29,12 +29,14 @@
#include <linux/highmem.h>
#include <linux/initrd.h>
#include <linux/nodemask.h>
#include <linux/module.h>
#include <asm/e820.h>
#include <asm/setup.h>
#include <asm/mmzone.h>
#include <bios_ebda.h>
struct pglist_data *node_data[MAX_NUMNODES];
EXPORT_SYMBOL(node_data);
bootmem_data_t node0_bdata;
/*
@ -42,12 +44,16 @@ bootmem_data_t node0_bdata;
* populated the following initialisation.
*
* 1) node_online_map - the map of all nodes configured (online) in the system
* 2) physnode_map - the mapping between a pfn and owning node
* 3) node_start_pfn - the starting page frame number for a node
* 2) node_start_pfn - the starting page frame number for a node
* 3) node_end_pfn - the ending page frame number for a node
*/
unsigned long node_start_pfn[MAX_NUMNODES];
unsigned long node_end_pfn[MAX_NUMNODES];
#ifdef CONFIG_DISCONTIGMEM
/*
* 4) physnode_map - the mapping between a pfn and owning node
* physnode_map keeps track of the physical memory layout of a generic
* numa node on a 256Mb break (each element of the array will
* represent 256Mb of memory and will be marked by the node id. so,
@ -59,6 +65,7 @@ bootmem_data_t node0_bdata;
* physnode_map[8- ] = -1;
*/
s8 physnode_map[MAX_ELEMENTS] = { [0 ... (MAX_ELEMENTS - 1)] = -1};
EXPORT_SYMBOL(physnode_map);
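Given that layout, a node lookup is a single array index. A minimal sketch of the consumer (pfn_to_nid_sketch and PAGES_PER_ELEMENT are illustrative names, assuming 4K pages so one 256MB element spans 65536 pfns):
#define PAGES_PER_ELEMENT ((256 * 1024 * 1024) >> PAGE_SHIFT)
static inline int pfn_to_nid_sketch(unsigned long pfn)
{
	/* one s8 entry per 256MB chunk; -1 means no node owns it */
	return (int)physnode_map[pfn / PAGES_PER_ELEMENT];
}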
void memory_present(int nid, unsigned long start, unsigned long end)
{
@ -85,9 +92,7 @@ unsigned long node_memmap_size_bytes(int nid, unsigned long start_pfn,
return (nr_pages + 1) * sizeof(struct page);
}
unsigned long node_start_pfn[MAX_NUMNODES];
unsigned long node_end_pfn[MAX_NUMNODES];
#endif
extern unsigned long find_max_low_pfn(void);
extern void find_max_pfn(void);
@ -108,6 +113,9 @@ unsigned long node_remap_offset[MAX_NUMNODES];
void *node_remap_start_vaddr[MAX_NUMNODES];
void set_pmd_pfn(unsigned long vaddr, unsigned long pfn, pgprot_t flags);
void *node_remap_end_vaddr[MAX_NUMNODES];
void *node_remap_alloc_vaddr[MAX_NUMNODES];
/*
* FLAT - support for basic PC memory model with discontig enabled, essentially
* a single node with all available processors in it with a flat
@ -146,6 +154,21 @@ static void __init find_max_pfn_node(int nid)
BUG();
}
/* Find the owning node for a pfn. */
int early_pfn_to_nid(unsigned long pfn)
{
int nid;
for_each_node(nid) {
if (node_end_pfn[nid] == 0)
break;
if (node_start_pfn[nid] <= pfn && node_end_pfn[nid] >= pfn)
return nid;
}
return 0;
}
/*
* Allocate memory for the pg_data_t for this node via a crude pre-bootmem
* method. For node zero take this from the bottom of memory, for
@ -163,6 +186,21 @@ static void __init allocate_pgdat(int nid)
}
}
void *alloc_remap(int nid, unsigned long size)
{
void *allocation = node_remap_alloc_vaddr[nid];
size = ALIGN(size, L1_CACHE_BYTES);
if (!allocation || (allocation + size) >= node_remap_end_vaddr[nid])
return 0;
node_remap_alloc_vaddr[nid] += size;
memset(allocation, 0, size);
return allocation;
}
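alloc_remap() hands out node-local memory from the remapped KVA window set up below, returning 0 once the window is exhausted. A hypothetical caller (illustrative only, not from this commit) would fall back to bootmem in that case:
static struct page *node_mem_map_sketch(int nid, unsigned long npages)
{
	unsigned long size = npages * sizeof(struct page);
	struct page *map = alloc_remap(nid, size);

	if (!map)	/* window absent or full: use the bootmem allocator */
		map = alloc_bootmem_node(NODE_DATA(nid), size);
	return map;
}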
void __init remap_numa_kva(void)
{
void *vaddr;
@ -170,8 +208,6 @@ void __init remap_numa_kva(void)
int node;
for_each_online_node(node) {
if (node == 0)
continue;
for (pfn=0; pfn < node_remap_size[node]; pfn += PTRS_PER_PTE) {
vaddr = node_remap_start_vaddr[node]+(pfn<<PAGE_SHIFT);
set_pmd_pfn((ulong) vaddr,
@ -185,13 +221,9 @@ static unsigned long calculate_numa_remap_pages(void)
{
int nid;
unsigned long size, reserve_pages = 0;
unsigned long pfn;
for_each_online_node(nid) {
if (nid == 0)
continue;
if (!node_remap_size[nid])
continue;
/*
* The acpi/srat node info can show hot-add memory zones
* where memory could be added but is not currently present.
@ -208,11 +240,24 @@ static unsigned long calculate_numa_remap_pages(void)
size = (size + LARGE_PAGE_BYTES - 1) / LARGE_PAGE_BYTES;
/* now the roundup is correct, convert to PAGE_SIZE pages */
size = size * PTRS_PER_PTE;
/*
* Validate the region we are allocating only contains valid
* pages.
*/
for (pfn = node_end_pfn[nid] - size;
pfn < node_end_pfn[nid]; pfn++)
if (!page_is_ram(pfn))
break;
if (pfn != node_end_pfn[nid])
size = 0;
printk("Reserving %ld pages of KVA for lmem_map of node %d\n",
size, nid);
node_remap_size[nid] = size;
reserve_pages += size;
node_remap_offset[nid] = reserve_pages;
reserve_pages += size;
printk("Shrinking node %d from %ld pages to %ld pages\n",
nid, node_end_pfn[nid], node_end_pfn[nid] - size);
node_end_pfn[nid] -= size;
@ -265,12 +310,18 @@ unsigned long __init setup_memory(void)
(ulong) pfn_to_kaddr(max_low_pfn));
for_each_online_node(nid) {
node_remap_start_vaddr[nid] = pfn_to_kaddr(
(highstart_pfn + reserve_pages) - node_remap_offset[nid]);
highstart_pfn + node_remap_offset[nid]);
/* Init the node remap allocator */
node_remap_end_vaddr[nid] = node_remap_start_vaddr[nid] +
(node_remap_size[nid] * PAGE_SIZE);
node_remap_alloc_vaddr[nid] = node_remap_start_vaddr[nid] +
ALIGN(sizeof(pg_data_t), PAGE_SIZE);
allocate_pgdat(nid);
printk ("node %d will remap to vaddr %08lx - %08lx\n", nid,
(ulong) node_remap_start_vaddr[nid],
(ulong) pfn_to_kaddr(highstart_pfn + reserve_pages
- node_remap_offset[nid] + node_remap_size[nid]));
(ulong) pfn_to_kaddr(highstart_pfn
+ node_remap_offset[nid] + node_remap_size[nid]));
}
printk("High memory starts at vaddr %08lx\n",
(ulong) pfn_to_kaddr(highstart_pfn));
@ -333,23 +384,9 @@ void __init zone_sizes_init(void)
}
zholes_size = get_zholes_size(nid);
/*
* We let the lmem_map for node 0 be allocated from the
* normal bootmem allocator, but other nodes come from the
* remapped KVA area - mbligh
*/
if (!nid)
free_area_init_node(nid, NODE_DATA(nid),
zones_size, start, zholes_size);
else {
unsigned long lmem_map;
lmem_map = (unsigned long)node_remap_start_vaddr[nid];
lmem_map += sizeof(pg_data_t) + PAGE_SIZE - 1;
lmem_map &= PAGE_MASK;
NODE_DATA(nid)->node_mem_map = (struct page *)lmem_map;
free_area_init_node(nid, NODE_DATA(nid), zones_size,
start, zholes_size);
}
free_area_init_node(nid, NODE_DATA(nid), zones_size, start,
zholes_size);
}
return;
}
@ -358,24 +395,26 @@ void __init set_highmem_pages_init(int bad_ppro)
{
#ifdef CONFIG_HIGHMEM
struct zone *zone;
struct page *page;
for_each_zone(zone) {
unsigned long node_pfn, node_high_size, zone_start_pfn;
struct page * zone_mem_map;
unsigned long node_pfn, zone_start_pfn, zone_end_pfn;
if (!is_highmem(zone))
continue;
printk("Initializing %s for node %d\n", zone->name,
zone->zone_pgdat->node_id);
node_high_size = zone->spanned_pages;
zone_mem_map = zone->zone_mem_map;
zone_start_pfn = zone->zone_start_pfn;
zone_end_pfn = zone_start_pfn + zone->spanned_pages;
for (node_pfn = 0; node_pfn < node_high_size; node_pfn++) {
one_highpage_init((struct page *)(zone_mem_map + node_pfn),
zone_start_pfn + node_pfn, bad_ppro);
printk("Initializing %s for node %d (%08lx:%08lx)\n",
zone->name, zone->zone_pgdat->node_id,
zone_start_pfn, zone_end_pfn);
for (node_pfn = zone_start_pfn; node_pfn < zone_end_pfn; node_pfn++) {
if (!pfn_valid(node_pfn))
continue;
page = pfn_to_page(node_pfn);
one_highpage_init(page, node_pfn, bad_ppro);
}
}
totalram_pages += totalhigh_pages;

View file

@ -1,4 +1,5 @@
#include <linux/highmem.h>
#include <linux/module.h>
void *kmap(struct page *page)
{
@ -87,3 +88,8 @@ struct page *kmap_atomic_to_page(void *ptr)
return pte_page(*pte);
}
EXPORT_SYMBOL(kmap);
EXPORT_SYMBOL(kunmap);
EXPORT_SYMBOL(kmap_atomic);
EXPORT_SYMBOL(kunmap_atomic);
EXPORT_SYMBOL(kmap_atomic_to_page);

View file

@ -191,7 +191,7 @@ static inline int page_kills_ppro(unsigned long pagenr)
extern int is_available_memory(efi_memory_desc_t *);
static inline int page_is_ram(unsigned long pagenr)
int page_is_ram(unsigned long pagenr)
{
int i;
unsigned long addr, end;
@ -276,7 +276,9 @@ void __init one_highpage_init(struct page *page, int pfn, int bad_ppro)
SetPageReserved(page);
}
#ifndef CONFIG_DISCONTIGMEM
#ifdef CONFIG_NUMA
extern void set_highmem_pages_init(int);
#else
static void __init set_highmem_pages_init(int bad_ppro)
{
int pfn;
@ -284,9 +286,7 @@ static void __init set_highmem_pages_init(int bad_ppro)
one_highpage_init(pfn_to_page(pfn), pfn, bad_ppro);
totalram_pages += totalhigh_pages;
}
#else
extern void set_highmem_pages_init(int);
#endif /* !CONFIG_DISCONTIGMEM */
#endif /* CONFIG_FLATMEM */
#else
#define kmap_init() do { } while (0)
@ -295,12 +295,13 @@ extern void set_highmem_pages_init(int);
#endif /* CONFIG_HIGHMEM */
unsigned long long __PAGE_KERNEL = _PAGE_KERNEL;
EXPORT_SYMBOL(__PAGE_KERNEL);
unsigned long long __PAGE_KERNEL_EXEC = _PAGE_KERNEL_EXEC;
#ifndef CONFIG_DISCONTIGMEM
#define remap_numa_kva() do {} while (0)
#else
#ifdef CONFIG_NUMA
extern void __init remap_numa_kva(void);
#else
#define remap_numa_kva() do {} while (0)
#endif
static void __init pagetable_init (void)
@ -525,7 +526,7 @@ static void __init set_max_mapnr_init(void)
#else
num_physpages = max_low_pfn;
#endif
#ifndef CONFIG_DISCONTIGMEM
#ifdef CONFIG_FLATMEM
max_mapnr = num_physpages;
#endif
}
@ -539,7 +540,7 @@ void __init mem_init(void)
int tmp;
int bad_ppro;
#ifndef CONFIG_DISCONTIGMEM
#ifdef CONFIG_FLATMEM
if (!mem_map)
BUG();
#endif

View file

@ -11,6 +11,7 @@
#include <linux/vmalloc.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <asm/io.h>
#include <asm/fixmap.h>
#include <asm/cacheflush.h>
@ -165,7 +166,7 @@ void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned l
}
return (void __iomem *) (offset + (char __iomem *)addr);
}
EXPORT_SYMBOL(__ioremap);
/**
* ioremap_nocache - map bus memory into CPU space
@ -222,6 +223,7 @@ void __iomem *ioremap_nocache (unsigned long phys_addr, unsigned long size)
return p;
}
EXPORT_SYMBOL(ioremap_nocache);
void iounmap(volatile void __iomem *addr)
{
@ -255,6 +257,7 @@ out_unlock:
write_unlock(&vmlist_lock);
kfree(p);
}
EXPORT_SYMBOL(iounmap);
void __init *bt_ioremap(unsigned long phys_addr, unsigned long size)
{

View file

@ -30,13 +30,14 @@ void show_mem(void)
struct page *page;
pg_data_t *pgdat;
unsigned long i;
struct page_state ps;
printk("Mem-info:\n");
show_free_areas();
printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
for_each_pgdat(pgdat) {
for (i = 0; i < pgdat->node_spanned_pages; ++i) {
page = pgdat->node_mem_map + i;
page = pgdat_page_nr(pgdat, i);
total++;
if (PageHighMem(page))
highmem++;
@ -53,6 +54,13 @@ void show_mem(void)
printk("%d reserved pages\n",reserved);
printk("%d pages shared\n",shared);
printk("%d pages swap cached\n",cached);
get_page_state(&ps);
printk("%lu pages dirty\n", ps.nr_dirty);
printk("%lu pages writeback\n", ps.nr_writeback);
printk("%lu pages mapped\n", ps.nr_mapped);
printk("%lu pages slab\n", ps.nr_slab);
printk("%lu pages pagetables\n", ps.nr_page_table_pages);
}
/*

View file

@ -91,7 +91,7 @@ x86_backtrace(struct pt_regs * const regs, unsigned int depth)
head = (struct frame_head *)regs->ebp;
#endif
if (!user_mode(regs)) {
if (!user_mode_vm(regs)) {
while (depth-- && valid_kernel_stack(head, regs))
head = dump_backtrace(head);
return;

View file

@ -226,6 +226,24 @@ static int pirq_via_set(struct pci_dev *router, struct pci_dev *dev, int pirq, i
return 1;
}
/*
* The VIA pirq rules are nibble-based, like ALI,
* but without the ugly irq number munging.
* However, for the 82C586 the nibble map is different.
*/
static int pirq_via586_get(struct pci_dev *router, struct pci_dev *dev, int pirq)
{
static unsigned int pirqmap[4] = { 3, 2, 5, 1 };
return read_config_nybble(router, 0x55, pirqmap[pirq-1]);
}
static int pirq_via586_set(struct pci_dev *router, struct pci_dev *dev, int pirq, int irq)
{
static unsigned int pirqmap[4] = { 3, 2, 5, 1 };
write_config_nybble(router, 0x55, pirqmap[pirq-1], irq);
return 1;
}
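The read_config_nybble()/write_config_nybble() helpers live earlier in this file and are not shown in the hunk; consistent with the usage above, they presumably pack two 4-bit pirq entries per PCI config byte, low nibble first. A sketch of that assumed packing:
static unsigned int read_config_nybble(struct pci_dev *router, unsigned offset, unsigned nr)
{
	u8 x;

	pci_read_config_byte(router, offset + (nr >> 1), &x);
	return (nr & 1) ? (x >> 4) : (x & 0xf);
}
static void write_config_nybble(struct pci_dev *router, unsigned offset, unsigned nr, unsigned int val)
{
	u8 x;

	pci_read_config_byte(router, offset + (nr >> 1), &x);
	x = (nr & 1) ? ((x & 0x0f) | (val << 4)) : ((x & 0xf0) | val);
	pci_write_config_byte(router, offset + (nr >> 1), x);
}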
/*
* ITE 8330G pirq rules are nibble-based
* FIXME: pirqmap may be { 1, 0, 3, 2 },
@ -512,6 +530,10 @@ static __init int via_router_probe(struct irq_router *r, struct pci_dev *router,
switch(device)
{
case PCI_DEVICE_ID_VIA_82C586_0:
r->name = "VIA";
r->get = pirq_via586_get;
r->set = pirq_via586_set;
return 1;
case PCI_DEVICE_ID_VIA_82C596:
case PCI_DEVICE_ID_VIA_82C686:
case PCI_DEVICE_ID_VIA_8231:

View file

@ -4,6 +4,7 @@
#include <linux/pci.h>
#include <linux/init.h>
#include <linux/module.h>
#include "pci.h"
#include "pci-functions.h"
@ -456,7 +457,7 @@ struct irq_routing_table * __devinit pcibios_get_irq_routing_table(void)
free_page(page);
return rt;
}
EXPORT_SYMBOL(pcibios_get_irq_routing_table);
int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq)
{
@ -473,6 +474,7 @@ int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq)
"S" (&pci_indirect));
return !(ret & 0xff00);
}
EXPORT_SYMBOL(pcibios_set_irq_routing);
static int __init pci_pcbios_init(void)
{

View file

@ -94,13 +94,13 @@ static void fix_processor_context(void)
* Now maybe reload the debug registers
*/
if (current->thread.debugreg[7]){
loaddebug(&current->thread, 0);
loaddebug(&current->thread, 1);
loaddebug(&current->thread, 2);
loaddebug(&current->thread, 3);
/* no 4 and 5 */
loaddebug(&current->thread, 6);
loaddebug(&current->thread, 7);
set_debugreg(current->thread.debugreg[0], 0);
set_debugreg(current->thread.debugreg[1], 1);
set_debugreg(current->thread.debugreg[2], 2);
set_debugreg(current->thread.debugreg[3], 3);
/* no 4 and 5 */
set_debugreg(current->thread.debugreg[6], 6);
set_debugreg(current->thread.debugreg[7], 7);
}
}

View file

@ -161,6 +161,8 @@ config IA64_PAGE_SIZE_64KB
endchoice
source kernel/Kconfig.hz
config IA64_BRL_EMU
bool
depends on ITANIUM
@ -197,7 +199,7 @@ config HOLES_IN_ZONE
bool
default y if VIRTUAL_MEM_MAP
config DISCONTIGMEM
config ARCH_DISCONTIGMEM_ENABLE
bool "Discontiguous memory support"
depends on (IA64_DIG || IA64_SGI_SN2 || IA64_GENERIC || IA64_HP_ZX1 || IA64_HP_ZX1_SWIOTLB) && NUMA && VIRTUAL_MEM_MAP
default y if (IA64_SGI_SN2 || IA64_GENERIC) && NUMA
@ -300,6 +302,8 @@ config PREEMPT
Say Y here if you are building a kernel for a desktop, embedded
or real-time system. Say N if you are unsure.
source "mm/Kconfig"
config HAVE_DEC_LOCK
bool
depends on (SMP || PREEMPT)

View file

@ -2,6 +2,17 @@ menu "Kernel hacking"
source "lib/Kconfig.debug"
config KPROBES
bool "Kprobes"
depends on DEBUG_KERNEL
help
Kprobes allows you to trap at almost any kernel address and
execute a callback function. register_kprobe() establishes
a probepoint and specifies the callback. Kprobes is useful
for kernel debugging, non-intrusive instrumentation and testing.
If in doubt, say "N".
choice
prompt "Physical memory granularity"
default IA64_GRANULE_64MB
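The help text above names the consumer-facing calls; for orientation, a minimal client of that API in a tree of this era would look roughly like the following (a sketch: the probed address is made up and would normally come from System.map or kallsyms):
#include <linux/module.h>
#include <linux/kprobes.h>
static int my_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	printk(KERN_INFO "kprobe hit at %p\n", p->addr);
	return 0;	/* 0: go on to single-step the probed instruction */
}
static struct kprobe my_probe = {
	/* hypothetical example address, not a real symbol */
	.addr = (kprobe_opcode_t *)0xa000000100061000UL,
	.pre_handler = my_pre_handler,
};
static int __init probe_init(void)
{
	return register_kprobe(&my_probe);
}
static void __exit probe_exit(void)
{
	unregister_kprobe(&my_probe);
}
module_init(probe_init);
module_exit(probe_exit);
MODULE_LICENSE("GPL");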

View file

@ -78,7 +78,7 @@ CONFIG_IA64_L1_CACHE_SHIFT=7
CONFIG_NUMA=y
CONFIG_VIRTUAL_MEM_MAP=y
CONFIG_HOLES_IN_ZONE=y
CONFIG_DISCONTIGMEM=y
CONFIG_ARCH_DISCONTIGMEM_ENABLE=y
# CONFIG_IA64_CYCLONE is not set
CONFIG_IOSAPIC=y
CONFIG_IA64_SGI_SN_SIM=y

View file

@ -84,7 +84,7 @@ CONFIG_IA64_L1_CACHE_SHIFT=7
CONFIG_NUMA=y
CONFIG_VIRTUAL_MEM_MAP=y
CONFIG_HOLES_IN_ZONE=y
CONFIG_DISCONTIGMEM=y
CONFIG_ARCH_DISCONTIGMEM_ENABLE=y
CONFIG_IA64_CYCLONE=y
CONFIG_IOSAPIC=y
CONFIG_FORCE_MAX_ZONEORDER=18

View file

@ -241,7 +241,7 @@ typedef struct compat_siginfo {
/* POSIX.1b timers */
struct {
timer_t _tid; /* timer id */
compat_timer_t _tid; /* timer id */
int _overrun; /* overrun count */
char _pad[sizeof(unsigned int) - sizeof(int)];
compat_sigval_t _sigval; /* same as below */

View file

@ -20,6 +20,7 @@ obj-$(CONFIG_SMP) += smp.o smpboot.o domain.o
obj-$(CONFIG_PERFMON) += perfmon_default_smpl.o
obj-$(CONFIG_IA64_CYCLONE) += cyclone.o
obj-$(CONFIG_IA64_MCA_RECOVERY) += mca_recovery.o
obj-$(CONFIG_KPROBES) += kprobes.o jprobes.o
obj-$(CONFIG_IA64_UNCACHED_ALLOCATOR) += uncached.o
mca_recovery-y += mca_drv.o mca_drv_asm.o

View file

@ -0,0 +1,61 @@
/*
* Jprobe specific operations
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* Copyright (C) Intel Corporation, 2005
*
* 2005-May Rusty Lynch <rusty.lynch@intel.com> and Anil S Keshavamurthy
* <anil.s.keshavamurthy@intel.com> initial implementation
*
* Jprobes (a.k.a. "jump probes", built on top of kprobes) allow a
* probe to be inserted into the beginning of a function call. The fundamental
* difference between a jprobe and a kprobe is the jprobe handler is executed
* in the same context as the target function, while the kprobe handlers
* are executed in interrupt context.
*
* For jprobes we initially gain control by placing a break point in the
* first instruction of the targeted function. When we catch that specific
* break, we:
* * set the return address to our jprobe_inst_return() function
* * jump to the jprobe handler function
*
* Since we fixed up the return address, the jprobe handler will return to our
* jprobe_inst_return() function, giving us control again. At this point we
* are back in the parent's frame marker, so we do yet another call to our
* jprobe_break() function to fix up the frame marker as it would normally
* exist in the target function.
*
* Our jprobe_return function then transfers control back to kprobes.c by
* executing a break instruction using one of our reserved numbers. When we
* catch that break in kprobes.c, we continue like we do for a normal kprobe
* by single stepping the emulated instruction, and then returning execution
* to the correct location.
*/
#include <asm/asmmacro.h>
/*
* void jprobe_break(void)
*/
ENTRY(jprobe_break)
break.m 0x80300
END(jprobe_break)
/*
* void jprobe_inst_return(void)
*/
GLOBAL_ENTRY(jprobe_inst_return)
br.call.sptk.many b0=jprobe_break
END(jprobe_inst_return)
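From the consumer side, the machinery described in the header comment above is driven by register_jprobe(); the handler must mirror the target function's prototype exactly and finish with jprobe_return(). A hedged sketch (my_target and its signature are hypothetical):
#include <linux/module.h>
#include <linux/kprobes.h>
extern long my_target(int a, int b);	/* hypothetical probed function */
static long my_jprobe_handler(int a, int b)
{
	printk(KERN_INFO "my_target(%d, %d)\n", a, b);
	jprobe_return();	/* hands control back; never falls through */
	return 0;		/* unreachable, keeps the compiler quiet */
}
static struct jprobe my_jprobe = {
	.entry = (kprobe_opcode_t *)my_jprobe_handler,
};
static int __init jprobe_init(void)
{
	/* On ia64 a function pointer is a descriptor; kp.addr wants the
	 * entry bundle itself, so pull .ip out of the descriptor. */
	my_jprobe.kp.addr = (kprobe_opcode_t *)((struct fnptr *)my_target)->ip;
	return register_jprobe(&my_jprobe);
}
static void __exit jprobe_exit(void)
{
	unregister_jprobe(&my_jprobe);
}
module_init(jprobe_init);
module_exit(jprobe_exit);
MODULE_LICENSE("GPL");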

arch/ia64/kernel/kprobes.c (new file, 601 lines)
View file

@ -0,0 +1,601 @@
/*
* Kernel Probes (KProbes)
* arch/ia64/kernel/kprobes.c
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* Copyright (C) IBM Corporation, 2002, 2004
* Copyright (C) Intel Corporation, 2005
*
* 2005-Apr Rusty Lynch <rusty.lynch@intel.com> and Anil S Keshavamurthy
* <anil.s.keshavamurthy@intel.com> adapted from i386
*/
#include <linux/config.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/preempt.h>
#include <linux/moduleloader.h>
#include <asm/pgtable.h>
#include <asm/kdebug.h>
extern void jprobe_inst_return(void);
/* kprobe_status settings */
#define KPROBE_HIT_ACTIVE 0x00000001
#define KPROBE_HIT_SS 0x00000002
static struct kprobe *current_kprobe, *kprobe_prev;
static unsigned long kprobe_status, kprobe_status_prev;
static struct pt_regs jprobe_saved_regs;
enum instruction_type {A, I, M, F, B, L, X, u};
static enum instruction_type bundle_encoding[32][3] = {
{ M, I, I }, /* 00 */
{ M, I, I }, /* 01 */
{ M, I, I }, /* 02 */
{ M, I, I }, /* 03 */
{ M, L, X }, /* 04 */
{ M, L, X }, /* 05 */
{ u, u, u }, /* 06 */
{ u, u, u }, /* 07 */
{ M, M, I }, /* 08 */
{ M, M, I }, /* 09 */
{ M, M, I }, /* 0A */
{ M, M, I }, /* 0B */
{ M, F, I }, /* 0C */
{ M, F, I }, /* 0D */
{ M, M, F }, /* 0E */
{ M, M, F }, /* 0F */
{ M, I, B }, /* 10 */
{ M, I, B }, /* 11 */
{ M, B, B }, /* 12 */
{ M, B, B }, /* 13 */
{ u, u, u }, /* 14 */
{ u, u, u }, /* 15 */
{ B, B, B }, /* 16 */
{ B, B, B }, /* 17 */
{ M, M, B }, /* 18 */
{ M, M, B }, /* 19 */
{ u, u, u }, /* 1A */
{ u, u, u }, /* 1B */
{ M, F, B }, /* 1C */
{ M, F, B }, /* 1D */
{ u, u, u }, /* 1E */
{ u, u, u }, /* 1F */
};
/*
* In this function we check to see if the instruction
* is an IP-relative instruction and update the kprobe
* inst flag accordingly
*/
static void update_kprobe_inst_flag(uint template, uint slot, uint major_opcode,
unsigned long kprobe_inst, struct kprobe *p)
{
p->ainsn.inst_flag = 0;
p->ainsn.target_br_reg = 0;
if (bundle_encoding[template][slot] == B) {
switch (major_opcode) {
case INDIRECT_CALL_OPCODE:
p->ainsn.inst_flag |= INST_FLAG_FIX_BRANCH_REG;
p->ainsn.target_br_reg = ((kprobe_inst >> 6) & 0x7);
break;
case IP_RELATIVE_PREDICT_OPCODE:
case IP_RELATIVE_BRANCH_OPCODE:
p->ainsn.inst_flag |= INST_FLAG_FIX_RELATIVE_IP_ADDR;
break;
case IP_RELATIVE_CALL_OPCODE:
p->ainsn.inst_flag |= INST_FLAG_FIX_RELATIVE_IP_ADDR;
p->ainsn.inst_flag |= INST_FLAG_FIX_BRANCH_REG;
p->ainsn.target_br_reg = ((kprobe_inst >> 6) & 0x7);
break;
}
} else if (bundle_encoding[template][slot] == X) {
switch (major_opcode) {
case LONG_CALL_OPCODE:
p->ainsn.inst_flag |= INST_FLAG_FIX_BRANCH_REG;
p->ainsn.target_br_reg = ((kprobe_inst >> 6) & 0x7);
break;
}
}
return;
}
/*
* In this function we check to see if the instruction
* on which we are inserting kprobe is supported.
* Returns 0 if supported
* Returns -EINVAL if unsupported
*/
static int unsupported_inst(uint template, uint slot, uint major_opcode,
unsigned long kprobe_inst, struct kprobe *p)
{
unsigned long addr = (unsigned long)p->addr;
if (bundle_encoding[template][slot] == I) {
switch (major_opcode) {
case 0x0: //I_UNIT_MISC_OPCODE:
/*
* Check for Integer speculation instruction
* - Bit 33-35 to be equal to 0x1
*/
if (((kprobe_inst >> 33) & 0x7) == 1) {
printk(KERN_WARNING
"Kprobes on speculation inst at <0x%lx> not supported\n",
addr);
return -EINVAL;
}
/*
* IP relative mov instruction
* - Bit 27-35 to be equal to 0x30
*/
if (((kprobe_inst >> 27) & 0x1FF) == 0x30) {
printk(KERN_WARNING
"Kprobes on \"mov r1=ip\" at <0x%lx> not supported\n",
addr);
return -EINVAL;
}
}
}
return 0;
}
/*
* In this function we check to see if the instruction
* (qp) cmpx.crel.ctype p1,p2=r2,r3
* on which we are inserting the kprobe is a cmp instruction
* with ctype as unc.
*/
static uint is_cmp_ctype_unc_inst(uint template, uint slot, uint major_opcode,
unsigned long kprobe_inst)
{
cmp_inst_t cmp_inst;
uint ctype_unc = 0;
if (!((bundle_encoding[template][slot] == I) ||
(bundle_encoding[template][slot] == M)))
goto out;
if (!((major_opcode == 0xC) || (major_opcode == 0xD) ||
(major_opcode == 0xE)))
goto out;
cmp_inst.l = kprobe_inst;
if ((cmp_inst.f.x2 == 0) || (cmp_inst.f.x2 == 1)) {
/* Integer compare - Register Register (A6 type) */
if ((cmp_inst.f.tb == 0) && (cmp_inst.f.ta == 0)
&&(cmp_inst.f.c == 1))
ctype_unc = 1;
} else if ((cmp_inst.f.x2 == 2)||(cmp_inst.f.x2 == 3)) {
/* Integer compare - Immediate Register (A8 type) */
if ((cmp_inst.f.ta == 0) &&(cmp_inst.f.c == 1))
ctype_unc = 1;
}
out:
return ctype_unc;
}
/*
* In this function we override the bundle with
* the break instruction at the given slot.
*/
static void prepare_break_inst(uint template, uint slot, uint major_opcode,
unsigned long kprobe_inst, struct kprobe *p)
{
unsigned long break_inst = BREAK_INST;
bundle_t *bundle = &p->ainsn.insn.bundle;
/*
* Copy the original kprobe_inst qualifying predicate (qp)
* to the break instruction iff !is_cmp_ctype_unc_inst,
* because a cmp instruction with ctype equal to unc is a
* special instruction that always needs to be executed
* regardless of qp
*/
if (!is_cmp_ctype_unc_inst(template, slot, major_opcode, kprobe_inst))
break_inst |= (0x3f & kprobe_inst);
switch (slot) {
case 0:
bundle->quad0.slot0 = break_inst;
break;
case 1:
bundle->quad0.slot1_p0 = break_inst;
bundle->quad1.slot1_p1 = break_inst >> (64-46);
break;
case 2:
bundle->quad1.slot2 = break_inst;
break;
}
/*
* Update the instruction flag, so that we can
* emulate the instruction properly after we
* single step on original instruction
*/
update_kprobe_inst_flag(template, slot, major_opcode, kprobe_inst, p);
}
static inline void get_kprobe_inst(bundle_t *bundle, uint slot,
unsigned long *kprobe_inst, uint *major_opcode)
{
unsigned long kprobe_inst_p0, kprobe_inst_p1;
unsigned int template;
template = bundle->quad0.template;
switch (slot) {
case 0:
*major_opcode = (bundle->quad0.slot0 >> SLOT0_OPCODE_SHIFT);
*kprobe_inst = bundle->quad0.slot0;
break;
case 1:
*major_opcode = (bundle->quad1.slot1_p1 >> SLOT1_p1_OPCODE_SHIFT);
kprobe_inst_p0 = bundle->quad0.slot1_p0;
kprobe_inst_p1 = bundle->quad1.slot1_p1;
*kprobe_inst = kprobe_inst_p0 | (kprobe_inst_p1 << (64-46));
break;
case 2:
*major_opcode = (bundle->quad1.slot2 >> SLOT2_OPCODE_SHIFT);
*kprobe_inst = bundle->quad1.slot2;
break;
}
}
static int valid_kprobe_addr(int template, int slot, unsigned long addr)
{
if ((slot > 2) || ((bundle_encoding[template][1] == L) && slot > 1)) {
printk(KERN_WARNING "Attempting to insert unaligned kprobe at 0x%lx\n",
addr);
return -EINVAL;
}
return 0;
}
static inline void save_previous_kprobe(void)
{
kprobe_prev = current_kprobe;
kprobe_status_prev = kprobe_status;
}
static inline void restore_previous_kprobe(void)
{
current_kprobe = kprobe_prev;
kprobe_status = kprobe_status_prev;
}
static inline void set_current_kprobe(struct kprobe *p)
{
current_kprobe = p;
}
int arch_prepare_kprobe(struct kprobe *p)
{
unsigned long addr = (unsigned long) p->addr;
unsigned long *kprobe_addr = (unsigned long *)(addr & ~0xFULL);
unsigned long kprobe_inst=0;
unsigned int slot = addr & 0xf, template, major_opcode = 0;
bundle_t *bundle = &p->ainsn.insn.bundle;
memcpy(&p->opcode.bundle, kprobe_addr, sizeof(bundle_t));
memcpy(&p->ainsn.insn.bundle, kprobe_addr, sizeof(bundle_t));
template = bundle->quad0.template;
if(valid_kprobe_addr(template, slot, addr))
return -EINVAL;
/* Move to slot 2, if bundle is MLX type and kprobe slot is 1 */
if (slot == 1 && bundle_encoding[template][1] == L)
slot++;
/* Get kprobe_inst and major_opcode from the bundle */
get_kprobe_inst(bundle, slot, &kprobe_inst, &major_opcode);
if (unsupported_inst(template, slot, major_opcode, kprobe_inst, p))
return -EINVAL;
prepare_break_inst(template, slot, major_opcode, kprobe_inst, p);
return 0;
}
void arch_arm_kprobe(struct kprobe *p)
{
unsigned long addr = (unsigned long)p->addr;
unsigned long arm_addr = addr & ~0xFULL;
memcpy((char *)arm_addr, &p->ainsn.insn.bundle, sizeof(bundle_t));
flush_icache_range(arm_addr, arm_addr + sizeof(bundle_t));
}
void arch_disarm_kprobe(struct kprobe *p)
{
unsigned long addr = (unsigned long)p->addr;
unsigned long arm_addr = addr & ~0xFULL;
/* p->opcode contains the original unaltered bundle */
memcpy((char *) arm_addr, (char *) &p->opcode.bundle, sizeof(bundle_t));
flush_icache_range(arm_addr, arm_addr + sizeof(bundle_t));
}
void arch_remove_kprobe(struct kprobe *p)
{
}
/*
* We are resuming execution after a single step fault, so the pt_regs
* structure reflects the register state after we executed the instruction
* located in the kprobe (p->ainsn.insn.bundle). We still need to adjust
* the ip to point back to the original stack address. To set the IP address
* to original stack address, handle the case where we need to fixup the
* relative IP address and/or fixup branch register.
*/
static void resume_execution(struct kprobe *p, struct pt_regs *regs)
{
unsigned long bundle_addr = ((unsigned long) (&p->opcode.bundle)) & ~0xFULL;
unsigned long resume_addr = (unsigned long)p->addr & ~0xFULL;
unsigned long template;
int slot = ((unsigned long)p->addr & 0xf);
template = p->opcode.bundle.quad0.template;
if (slot == 1 && bundle_encoding[template][1] == L)
slot = 2;
if (p->ainsn.inst_flag) {
if (p->ainsn.inst_flag & INST_FLAG_FIX_RELATIVE_IP_ADDR) {
/* Fix relative IP address */
regs->cr_iip = (regs->cr_iip - bundle_addr) + resume_addr;
}
if (p->ainsn.inst_flag & INST_FLAG_FIX_BRANCH_REG) {
/*
* Fix target branch register, software convention is
* to use either b0 or b6 or b7, so just checking
* only those registers
*/
switch (p->ainsn.target_br_reg) {
case 0:
if ((regs->b0 == bundle_addr) ||
(regs->b0 == bundle_addr + 0x10)) {
regs->b0 = (regs->b0 - bundle_addr) +
resume_addr;
}
break;
case 6:
if ((regs->b6 == bundle_addr) ||
(regs->b6 == bundle_addr + 0x10)) {
regs->b6 = (regs->b6 - bundle_addr) +
resume_addr;
}
break;
case 7:
if ((regs->b7 == bundle_addr) ||
(regs->b7 == bundle_addr + 0x10)) {
regs->b7 = (regs->b7 - bundle_addr) +
resume_addr;
}
break;
} /* end switch */
}
goto turn_ss_off;
}
if (slot == 2) {
if (regs->cr_iip == bundle_addr + 0x10) {
regs->cr_iip = resume_addr + 0x10;
}
} else {
if (regs->cr_iip == bundle_addr) {
regs->cr_iip = resume_addr;
}
}
turn_ss_off:
/* Turn off Single Step bit */
ia64_psr(regs)->ss = 0;
}
static void prepare_ss(struct kprobe *p, struct pt_regs *regs)
{
unsigned long bundle_addr = (unsigned long) &p->opcode.bundle;
unsigned long slot = (unsigned long)p->addr & 0xf;
/* Update instruction pointer (IIP) and slot number (IPSR.ri) */
regs->cr_iip = bundle_addr & ~0xFULL;
if (slot > 2)
slot = 0;
ia64_psr(regs)->ri = slot;
/* turn on single stepping */
ia64_psr(regs)->ss = 1;
}
static int pre_kprobes_handler(struct die_args *args)
{
struct kprobe *p;
int ret = 0;
struct pt_regs *regs = args->regs;
kprobe_opcode_t *addr = (kprobe_opcode_t *)instruction_pointer(regs);
preempt_disable();
/* Handle recursion cases */
if (kprobe_running()) {
p = get_kprobe(addr);
if (p) {
if (kprobe_status == KPROBE_HIT_SS) {
unlock_kprobes();
goto no_kprobe;
}
/* We have reentered the pre_kprobe_handler(), since
* another probe was hit while within the handler.
* Here we save the original kprobes variables and
* just single-step on the instruction of the new probe
* without calling any user handlers.
*/
save_previous_kprobe();
set_current_kprobe(p);
p->nmissed++;
prepare_ss(p, regs);
kprobe_status = KPROBE_REENTER;
return 1;
} else if (args->err == __IA64_BREAK_JPROBE) {
/*
* jprobe instrumented function just completed
*/
p = current_kprobe;
if (p->break_handler && p->break_handler(p, regs)) {
goto ss_probe;
}
} else {
/* Not our break */
goto no_kprobe;
}
}
lock_kprobes();
p = get_kprobe(addr);
if (!p) {
unlock_kprobes();
goto no_kprobe;
}
kprobe_status = KPROBE_HIT_ACTIVE;
set_current_kprobe(p);
if (p->pre_handler && p->pre_handler(p, regs))
/*
* Our pre-handler is specifically requesting that we just
* do a return. This is handling the case where the
* pre-handler is really our special jprobe pre-handler.
*/
return 1;
ss_probe:
prepare_ss(p, regs);
kprobe_status = KPROBE_HIT_SS;
return 1;
no_kprobe:
preempt_enable_no_resched();
return ret;
}
static int post_kprobes_handler(struct pt_regs *regs)
{
if (!kprobe_running())
return 0;
if ((kprobe_status != KPROBE_REENTER) && current_kprobe->post_handler) {
kprobe_status = KPROBE_HIT_SSDONE;
current_kprobe->post_handler(current_kprobe, regs, 0);
}
resume_execution(current_kprobe, regs);
/* Restore the original saved kprobes variables and continue. */
if (kprobe_status == KPROBE_REENTER) {
restore_previous_kprobe();
goto out;
}
unlock_kprobes();
out:
preempt_enable_no_resched();
return 1;
}
static int kprobes_fault_handler(struct pt_regs *regs, int trapnr)
{
if (!kprobe_running())
return 0;
if (current_kprobe->fault_handler &&
current_kprobe->fault_handler(current_kprobe, regs, trapnr))
return 1;
if (kprobe_status & KPROBE_HIT_SS) {
resume_execution(current_kprobe, regs);
unlock_kprobes();
preempt_enable_no_resched();
}
return 0;
}
int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val,
void *data)
{
struct die_args *args = (struct die_args *)data;
switch(val) {
case DIE_BREAK:
if (pre_kprobes_handler(args))
return NOTIFY_STOP;
break;
case DIE_SS:
if (post_kprobes_handler(args->regs))
return NOTIFY_STOP;
break;
case DIE_PAGE_FAULT:
if (kprobes_fault_handler(args->regs, args->trapnr))
return NOTIFY_STOP;
default:
break;
}
return NOTIFY_DONE;
}
int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
struct jprobe *jp = container_of(p, struct jprobe, kp);
unsigned long addr = ((struct fnptr *)(jp->entry))->ip;
/* save architectural state */
jprobe_saved_regs = *regs;
/* after rfi, execute the jprobe instrumented function */
regs->cr_iip = addr & ~0xFULL;
ia64_psr(regs)->ri = addr & 0xf;
regs->r1 = ((struct fnptr *)(jp->entry))->gp;
/*
* fix the return address to our jprobe_inst_return() function
* in the jprobes.S file
*/
regs->b0 = ((struct fnptr *)(jprobe_inst_return))->ip;
return 1;
}
int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
{
*regs = jprobe_saved_regs;
return 1;
}

View file

@ -21,12 +21,26 @@
#include <asm/intrinsics.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
#include <asm/kdebug.h>
extern spinlock_t timerlist_lock;
fpswa_interface_t *fpswa_interface;
EXPORT_SYMBOL(fpswa_interface);
struct notifier_block *ia64die_chain;
static DEFINE_SPINLOCK(die_notifier_lock);
int register_die_notifier(struct notifier_block *nb)
{
int err = 0;
unsigned long flags;
spin_lock_irqsave(&die_notifier_lock, flags);
err = notifier_chain_register(&ia64die_chain, nb);
spin_unlock_irqrestore(&die_notifier_lock, flags);
return err;
}
void __init
trap_init (void)
{
@ -137,6 +151,10 @@ ia64_bad_break (unsigned long break_num, struct pt_regs *regs)
switch (break_num) {
case 0: /* unknown error (used by GCC for __builtin_abort()) */
if (notify_die(DIE_BREAK, "break 0", regs, break_num, TRAP_BRKPT, SIGTRAP)
== NOTIFY_STOP) {
return;
}
die_if_kernel("bugcheck!", regs, break_num);
sig = SIGILL; code = ILL_ILLOPC;
break;
@ -189,6 +207,15 @@ ia64_bad_break (unsigned long break_num, struct pt_regs *regs)
sig = SIGILL; code = __ILL_BNDMOD;
break;
case 0x80200:
case 0x80300:
if (notify_die(DIE_BREAK, "kprobe", regs, break_num, TRAP_BRKPT, SIGTRAP)
== NOTIFY_STOP) {
return;
}
sig = SIGTRAP; code = TRAP_BRKPT;
break;
default:
if (break_num < 0x40000 || break_num > 0x100000)
die_if_kernel("Bad break", regs, break_num);
@ -548,7 +575,11 @@ ia64_fault (unsigned long vector, unsigned long isr, unsigned long ifa,
#endif
break;
case 35: siginfo.si_code = TRAP_BRANCH; ifa = 0; break;
case 36: siginfo.si_code = TRAP_TRACE; ifa = 0; break;
case 36:
if (notify_die(DIE_SS, "ss", &regs, vector,
vector, SIGTRAP) == NOTIFY_STOP)
return;
siginfo.si_code = TRAP_TRACE; ifa = 0; break;
}
siginfo.si_signo = SIGTRAP;
siginfo.si_errno = 0;

View file

@ -560,14 +560,15 @@ void show_mem(void)
int shared = 0, cached = 0, reserved = 0;
printk("Node ID: %d\n", pgdat->node_id);
for(i = 0; i < pgdat->node_spanned_pages; i++) {
struct page *page = pgdat_page_nr(pgdat, i);
if (!ia64_pfn_valid(pgdat->node_start_pfn+i))
continue;
if (PageReserved(pgdat->node_mem_map+i))
if (PageReserved(page))
reserved++;
else if (PageSwapCache(pgdat->node_mem_map+i))
else if (PageSwapCache(page))
cached++;
else if (page_count(pgdat->node_mem_map+i))
shared += page_count(pgdat->node_mem_map+i)-1;
else if (page_count(page))
shared += page_count(page)-1;
}
total_present += present;
total_reserved += reserved;

View file

@ -14,6 +14,7 @@
#include <asm/processor.h>
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/kdebug.h>
extern void die (char *, struct pt_regs *, long);
@ -102,6 +103,13 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
goto bad_area_no_up;
#endif
/*
* This is to handle kprobes on user-space access instructions
*/
if (notify_die(DIE_PAGE_FAULT, "page fault", regs, code, TRAP_BRKPT,
SIGSEGV) == NOTIFY_STOP)
return;
down_read(&mm->mmap_sem);
vma = find_vma_prev(mm, address, &prev_vma);

View file

@ -172,11 +172,13 @@ config NOHIGHMEM
bool
default y
config DISCONTIGMEM
config ARCH_DISCONTIGMEM_ENABLE
bool "Internal RAM Support"
depends on CHIP_M32700 || CHIP_M32102 || CHIP_VDEC2 || CHIP_OPSP
default y
source "mm/Kconfig"
config IRAM_START
hex "Internal memory start address (hex)"
default "00f00000"

View file

@ -49,7 +49,7 @@ void show_mem(void)
printk("Free swap: %6ldkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
for_each_pgdat(pgdat) {
for (i = 0; i < pgdat->node_spanned_pages; ++i) {
page = pgdat->node_mem_map + i;
page = pgdat_page_nr(pgdat, i);
total++;
if (PageHighMem(page))
highmem++;
@ -152,7 +152,7 @@ int __init reservedpages_count(void)
reservedpages = 0;
for_each_online_node(nid)
for (i = 0 ; i < MAX_LOW_PFN(nid) - START_PFN(nid) ; i++)
if (PageReserved(NODE_DATA(nid)->node_mem_map + i))
if (PageReserved(nid_page_nr(nid, i)))
reservedpages++;
return reservedpages;

View file

@ -357,6 +357,8 @@ config 060_WRITETHROUGH
is hardwired on. The 53c710 SCSI driver is known to suffer from
this problem.
source "mm/Kconfig"
endmenu
menu "General setup"

View file

@ -532,6 +532,8 @@ config ROMKERNEL
endchoice
source "mm/Kconfig"
endmenu
config ISA_DMA_API

View file

@ -492,7 +492,7 @@ config SGI_SN0_N_MODE
which allows for more memory. Your system is most probably
running in M-Mode, so you should say N here.
config DISCONTIGMEM
config ARCH_DISCONTIGMEM_ENABLE
bool
default y if SGI_IP27
help

View file

@ -82,7 +82,7 @@ CONFIG_STOP_MACHINE=y
# CONFIG_SGI_IP22 is not set
CONFIG_SGI_IP27=y
# CONFIG_SGI_SN0_N_MODE is not set
CONFIG_DISCONTIGMEM=y
CONFIG_ARCH_DISCONTIGMEM_ENABLE=y
CONFIG_NUMA=y
# CONFIG_MAPPED_KERNEL is not set
# CONFIG_REPLICATE_KTEXT is not set

View file

@ -549,9 +549,8 @@ void __init mem_init(void)
*/
numslots = node_getlastslot(node);
for (slot = 1; slot <= numslots; slot++) {
p = NODE_DATA(node)->node_mem_map +
(slot_getbasepfn(node, slot) -
slot_getbasepfn(node, 0));
p = nid_page_nr(node, slot_getbasepfn(node, slot) -
slot_getbasepfn(node, 0));
/*
* Free valid memory in current slot.

View file

@ -148,7 +148,7 @@ config HOTPLUG_CPU
default y if SMP
select HOTPLUG
config DISCONTIGMEM
config ARCH_DISCONTIGMEM_ENABLE
bool "Discontiguous memory support (EXPERIMENTAL)"
depends on EXPERIMENTAL
help
@ -157,6 +157,8 @@ config DISCONTIGMEM
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
source "mm/Kconfig"
config PREEMPT
bool
# bool "Preemptible Kernel"

View file

@ -506,7 +506,7 @@ void show_mem(void)
for (j = node_start_pfn(i); j < node_end_pfn(i); j++) {
struct page *p;
p = node_mem_map(i) + j - node_start_pfn(i);
p = nid_page_nr(i, j) - node_start_pfn(i);
total++;
if (PageReserved(p))

View file

@ -905,6 +905,8 @@ config PREEMPT
config HIGHMEM
bool "High memory support"
source "mm/Kconfig"
source "fs/Kconfig.binfmt"
config PROC_DEVICETREE

View file

@ -222,7 +222,7 @@ decompress_kernel(unsigned long load_addr, int num_words, unsigned long cksum)
puts("\n");
puts("Uncompressing Linux...");
gunzip(0x0, 0x400000, zimage_start, &zimage_size);
gunzip(NULL, 0x400000, zimage_start, &zimage_size);
puts("done.\n");
/* get the bi_rec address */

View file

@ -33,7 +33,7 @@
#define MPC10X_PCI_OP(rw, size, type, op, mask) \
static void \
mpc10x_##rw##_config_##size(unsigned int *cfg_addr, \
mpc10x_##rw##_config_##size(unsigned int __iomem *cfg_addr, \
unsigned int *cfg_data, int devfn, int offset, \
type val) \
{ \

View file

@ -77,6 +77,10 @@ config PPC_PSERIES
bool " IBM pSeries & new iSeries"
default y
config PPC_BPA
bool " Broadband Processor Architecture"
depends on PPC_MULTIPLATFORM
config PPC_PMAC
depends on PPC_MULTIPLATFORM
bool " Apple G5 based machines"
@ -106,6 +110,21 @@ config PPC_OF
bool
default y
config XICS
depends on PPC_PSERIES
bool
default y
config MPIC
depends on PPC_PSERIES || PPC_PMAC || PPC_MAPLE
bool
default y
config BPA_IIC
depends on PPC_BPA
bool
default y
# VMX is pSeries only for now until somebody writes the iSeries
# exception vectors for it
config ALTIVEC
@ -198,13 +217,49 @@ config HMT
This option enables hardware multithreading on RS64 cpus.
pSeries systems p620 and p660 have such a cpu type.
config DISCONTIGMEM
bool "Discontiguous Memory Support"
config ARCH_SELECT_MEMORY_MODEL
def_bool y
config ARCH_FLATMEM_ENABLE
def_bool y
depends on !NUMA
config ARCH_DISCONTIGMEM_ENABLE
def_bool y
depends on SMP && PPC_PSERIES
config ARCH_DISCONTIGMEM_DEFAULT
def_bool y
depends on ARCH_DISCONTIGMEM_ENABLE
config ARCH_FLATMEM_ENABLE
def_bool y
config ARCH_SPARSEMEM_ENABLE
def_bool y
depends on ARCH_DISCONTIGMEM_ENABLE
source "mm/Kconfig"
config HAVE_ARCH_EARLY_PFN_TO_NID
def_bool y
depends on NEED_MULTIPLE_NODES
# Some NUMA nodes have memory ranges that span
# other nodes. Even though a pfn is valid and
# between a node's start and end pfns, it may not
# reside on that node.
#
# This is a relatively temporary hack that should
# be able to go away when sparsemem is fully in
# place
config NODES_SPAN_OTHER_NODES
def_bool y
depends on NEED_MULTIPLE_NODES
config NUMA
bool "NUMA support"
depends on DISCONTIGMEM
default y if DISCONTIGMEM || SPARSEMEM
config SCHED_SMT
bool "SMT (Hyperthreading) scheduler support"
@ -256,7 +311,7 @@ config MSCHUNKS
config PPC_RTAS
bool
depends on PPC_PSERIES
depends on PPC_PSERIES || PPC_BPA
default y
config RTAS_PROC

View file

@ -90,12 +90,14 @@ boot := arch/ppc64/boot
boottarget-$(CONFIG_PPC_PSERIES) := zImage zImage.initrd
boottarget-$(CONFIG_PPC_MAPLE) := zImage zImage.initrd
boottarget-$(CONFIG_PPC_ISERIES) := vmlinux.sminitrd vmlinux.initrd vmlinux.sm
boottarget-$(CONFIG_PPC_BPA) := zImage zImage.initrd
$(boottarget-y): vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
bootimage-$(CONFIG_PPC_PSERIES) := $(boot)/zImage
bootimage-$(CONFIG_PPC_PMAC) := vmlinux
bootimage-$(CONFIG_PPC_MAPLE) := $(boot)/zImage
bootimage-$(CONFIG_PPC_BPA) := zImage
bootimage-$(CONFIG_PPC_ISERIES) := vmlinux
BOOTIMAGE := $(bootimage-y)
install: vmlinux

View file

@ -22,8 +22,8 @@
# User may have a custom install script
if [ -x ~/bin/installkernel ]; then exec ~/bin/installkernel "$@"; fi
if [ -x /sbin/installkernel ]; then exec /sbin/installkernel "$@"; fi
if [ -x ~/bin/${CROSS_COMPILE}installkernel ]; then exec ~/bin/${CROSS_COMPILE}installkernel "$@"; fi
if [ -x /sbin/${CROSS_COMPILE}installkernel ]; then exec /sbin/${CROSS_COMPILE}installkernel "$@"; fi
# Default install

View file

@ -88,7 +88,7 @@ CONFIG_IBMVIO=y
CONFIG_IOMMU_VMERGE=y
CONFIG_SMP=y
CONFIG_NR_CPUS=128
CONFIG_DISCONTIGMEM=y
CONFIG_ARCH_DISCONTIGMEM_ENABLE=y
CONFIG_NUMA=y
CONFIG_SCHED_SMT=y
# CONFIG_PREEMPT is not set

View file

@ -89,7 +89,7 @@ CONFIG_BOOTX_TEXT=y
CONFIG_IOMMU_VMERGE=y
CONFIG_SMP=y
CONFIG_NR_CPUS=32
CONFIG_DISCONTIGMEM=y
CONFIG_ARCH_DISCONTIGMEM_ENABLE=y
# CONFIG_NUMA is not set
# CONFIG_SCHED_SMT is not set
# CONFIG_PREEMPT is not set

View file

@ -27,17 +27,21 @@ obj-$(CONFIG_PPC_ISERIES) += HvCall.o HvLpConfig.o LparData.o \
mf.o HvLpEvent.o iSeries_proc.o iSeries_htab.o \
iSeries_iommu.o
obj-$(CONFIG_PPC_MULTIPLATFORM) += nvram.o i8259.o prom_init.o prom.o mpic.o
obj-$(CONFIG_PPC_MULTIPLATFORM) += nvram.o i8259.o prom_init.o prom.o
obj-$(CONFIG_PPC_PSERIES) += pSeries_pci.o pSeries_lpar.o pSeries_hvCall.o \
pSeries_nvram.o rtasd.o ras.o pSeries_reconfig.o \
xics.o rtas.o pSeries_setup.o pSeries_iommu.o
pSeries_setup.o pSeries_iommu.o
obj-$(CONFIG_PPC_BPA) += bpa_setup.o bpa_iommu.o bpa_nvram.o \
bpa_iic.o spider-pic.o
obj-$(CONFIG_EEH) += eeh.o
obj-$(CONFIG_PROC_FS) += proc_ppc64.o
obj-$(CONFIG_RTAS_FLASH) += rtas_flash.o
obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_MODULES) += module.o ppc_ksyms.o
obj-$(CONFIG_PPC_RTAS) += rtas.o rtas_pci.o
obj-$(CONFIG_RTAS_PROC) += rtas-proc.o
obj-$(CONFIG_SCANLOG) += scanlog.o
obj-$(CONFIG_VIOPATH) += viopath.o
@ -46,6 +50,8 @@ obj-$(CONFIG_HVC_CONSOLE) += hvconsole.o
obj-$(CONFIG_BOOTX_TEXT) += btext.o
obj-$(CONFIG_HVCS) += hvcserver.o
obj-$(CONFIG_IBMVIO) += vio.o
obj-$(CONFIG_XICS) += xics.o
obj-$(CONFIG_MPIC) += mpic.o
obj-$(CONFIG_PPC_PMAC) += pmac_setup.o pmac_feature.o pmac_pci.o \
pmac_time.o pmac_nvram.o pmac_low_i2c.o
@ -58,6 +64,7 @@ ifdef CONFIG_SMP
obj-$(CONFIG_PPC_PMAC) += pmac_smp.o smp-tbsync.o
obj-$(CONFIG_PPC_ISERIES) += iSeries_smp.o
obj-$(CONFIG_PPC_PSERIES) += pSeries_smp.o
obj-$(CONFIG_PPC_BPA) += pSeries_smp.o
obj-$(CONFIG_PPC_MAPLE) += smp-tbsync.o
endif

270 arch/ppc64/kernel/bpa_iic.c Normal file
View file

@ -0,0 +1,270 @@
/*
* BPA Internal Interrupt Controller
*
* (C) Copyright IBM Deutschland Entwicklung GmbH 2005
*
* Author: Arnd Bergmann <arndb@de.ibm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <linux/config.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/percpu.h>
#include <linux/types.h>
#include <asm/io.h>
#include <asm/pgtable.h>
#include <asm/prom.h>
#include <asm/ptrace.h>
#include "bpa_iic.h"
struct iic_pending_bits {
u32 data;
u8 flags;
u8 class;
u8 source;
u8 prio;
};
enum iic_pending_flags {
IIC_VALID = 0x80,
IIC_IPI = 0x40,
};
struct iic_regs {
struct iic_pending_bits pending;
struct iic_pending_bits pending_destr;
u64 generate;
u64 prio;
};
struct iic {
struct iic_regs __iomem *regs;
};
static DEFINE_PER_CPU(struct iic, iic);
void iic_local_enable(void)
{
out_be64(&__get_cpu_var(iic).regs->prio, 0xff);
}
void iic_local_disable(void)
{
out_be64(&__get_cpu_var(iic).regs->prio, 0x0);
}
static unsigned int iic_startup(unsigned int irq)
{
return 0;
}
static void iic_enable(unsigned int irq)
{
iic_local_enable();
}
static void iic_disable(unsigned int irq)
{
}
static void iic_end(unsigned int irq)
{
iic_local_enable();
}
static struct hw_interrupt_type iic_pic = {
.typename = " BPA-IIC ",
.startup = iic_startup,
.enable = iic_enable,
.disable = iic_disable,
.end = iic_end,
};
static int iic_external_get_irq(struct iic_pending_bits pending)
{
int irq;
unsigned char node, unit;
node = pending.source >> 4;
unit = pending.source & 0xf;
irq = -1;
/*
* This mapping is specific to the Broadband
* Engine. We might need to get the numbers
* from the device tree to support future CPUs.
*/
switch (unit) {
case 0x00:
case 0x0b:
/*
* One of these units can be connected
* to an external interrupt controller.
*/
if (pending.prio > 0x3f ||
pending.class != 2)
break;
irq = IIC_EXT_OFFSET
+ spider_get_irq(pending.prio + node * IIC_NODE_STRIDE)
+ node * IIC_NODE_STRIDE;
break;
case 0x01 ... 0x04:
case 0x07 ... 0x0a:
/*
* These units are connected to the SPEs
*/
if (pending.class > 2)
break;
irq = IIC_SPE_OFFSET
+ pending.class * IIC_CLASS_STRIDE
+ node * IIC_NODE_STRIDE
+ unit;
break;
}
if (irq == -1)
printk(KERN_WARNING "Unexpected interrupt class %02x, "
"source %02x, prio %02x, cpu %02x\n", pending.class,
pending.source, pending.prio, smp_processor_id());
return irq;
}
/* Get an IRQ number from the pending state register of the IIC */
int iic_get_irq(struct pt_regs *regs)
{
struct iic *iic;
int irq;
struct iic_pending_bits pending;
iic = &__get_cpu_var(iic);
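/* a single 64-bit read of the (destructive) pending register,
 * type-punned into the iic_pending_bits layout above */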
*(unsigned long *) &pending =
in_be64((unsigned long __iomem *) &iic->regs->pending_destr);
irq = -1;
if (pending.flags & IIC_VALID) {
if (pending.flags & IIC_IPI) {
irq = IIC_IPI_OFFSET + (pending.prio >> 4);
/*
if (irq > 0x80)
printk(KERN_WARNING "Unexpected IPI prio %02x"
"on CPU %02x\n", pending.prio,
smp_processor_id());
*/
} else {
irq = iic_external_get_irq(pending);
}
}
return irq;
}
static struct iic_regs __iomem *find_iic(int cpu)
{
struct device_node *np;
int nodeid = cpu / 2;
unsigned long regs;
struct iic_regs __iomem *iic_regs;
for (np = of_find_node_by_type(NULL, "cpu");
np;
np = of_find_node_by_type(np, "cpu")) {
if (nodeid == *(int *)get_property(np, "node-id", NULL))
break;
}
if (!np) {
printk(KERN_WARNING "IIC: CPU %d not found\n", cpu);
iic_regs = NULL;
} else {
regs = *(long *)get_property(np, "iic", NULL);
/* hack until we have decided on the devtree info */
regs += 0x400;
if (cpu & 1)
regs += 0x20;
printk(KERN_DEBUG "IIC for CPU %d at %lx\n", cpu, regs);
iic_regs = __ioremap(regs, sizeof(struct iic_regs),
_PAGE_NO_CACHE);
}
return iic_regs;
}
#ifdef CONFIG_SMP
void iic_setup_cpu(void)
{
out_be64(&__get_cpu_var(iic).regs->prio, 0xff);
}
void iic_cause_IPI(int cpu, int mesg)
{
out_be64(&per_cpu(iic, cpu).regs->generate, mesg);
}
static irqreturn_t iic_ipi_action(int irq, void *dev_id, struct pt_regs *regs)
{
smp_message_recv(irq - IIC_IPI_OFFSET, regs);
return IRQ_HANDLED;
}
static void iic_request_ipi(int irq, const char *name)
{
/* IPIs are marked SA_INTERRUPT as they must run with irqs
* disabled */
get_irq_desc(irq)->handler = &iic_pic;
get_irq_desc(irq)->status |= IRQ_PER_CPU;
request_irq(irq, iic_ipi_action, SA_INTERRUPT, name, NULL);
}
void iic_request_IPIs(void)
{
iic_request_ipi(IIC_IPI_OFFSET + PPC_MSG_CALL_FUNCTION, "IPI-call");
iic_request_ipi(IIC_IPI_OFFSET + PPC_MSG_RESCHEDULE, "IPI-resched");
#ifdef CONFIG_DEBUGGER
iic_request_ipi(IIC_IPI_OFFSET + PPC_MSG_DEBUGGER_BREAK, "IPI-debug");
#endif /* CONFIG_DEBUGGER */
}
#endif /* CONFIG_SMP */
static void iic_setup_spe_handlers(void)
{
int be, isrc;
/* Assume two threads per BE are present */
for (be=0; be < num_present_cpus() / 2; be++) {
for (isrc = 0; isrc < IIC_CLASS_STRIDE * 3; isrc++) {
int irq = IIC_NODE_STRIDE * be + IIC_SPE_OFFSET + isrc;
get_irq_desc(irq)->handler = &iic_pic;
}
}
}
void iic_init_IRQ(void)
{
int cpu, irq_offset;
struct iic *iic;
irq_offset = 0;
for_each_cpu(cpu) {
iic = &per_cpu(iic, cpu);
iic->regs = find_iic(cpu);
if (iic->regs)
out_be64(&iic->regs->prio, 0xff);
}
iic_setup_spe_handlers();
}

View file

@ -0,0 +1,62 @@
#ifndef ASM_BPA_IIC_H
#define ASM_BPA_IIC_H
#ifdef __KERNEL__
/*
* Mapping of IIC pending bits into per-node
* interrupt numbers.
*
* IRQ FF CC SS PP FF CC SS PP Description
*
* 00-3f 80 02 +0 00 - 80 02 +0 3f South Bridge
* 00-3f 80 02 +b 00 - 80 02 +b 3f South Bridge
* 41-4a 80 00 +1 ** - 80 00 +a ** SPU Class 0
* 51-5a 80 01 +1 ** - 80 01 +a ** SPU Class 1
* 61-6a 80 02 +1 ** - 80 02 +a ** SPU Class 2
* 70-7f C0 ** ** 00 - C0 ** ** 0f IPI
*
* F flags
* C class
* S source
* P Priority
* + node number
* * don't care
*
* A node consists of a Broadband Engine and an optional
* south bridge device providing a maximum of 64 IRQs.
* The south bridge may be connected to either IOIF0
* or IOIF1.
* Each SPE is represented as three IRQ lines, one per
* interrupt class.
* 16 IRQ numbers are reserved for inter-processor
* interrupts, although these are only used in the
* IRQ range of the first node.
*
* This scheme needs 128 IRQ numbers per BIF node ID,
* which means that with the total of 512 lines
* available, we can have a maximum of four nodes.
*/
enum {
IIC_EXT_OFFSET = 0x00, /* Start of south bridge IRQs */
IIC_NUM_EXT = 0x40, /* Number of south bridge IRQs */
IIC_SPE_OFFSET = 0x40, /* Start of SPE interrupts */
IIC_CLASS_STRIDE = 0x10, /* SPE IRQs per class */
IIC_IPI_OFFSET = 0x70, /* Start of IPI IRQs */
IIC_NUM_IPIS = 0x10, /* IRQs reserved for IPI */
IIC_NODE_STRIDE = 0x80, /* Total IRQs per node */
};
extern void iic_init_IRQ(void);
extern int iic_get_irq(struct pt_regs *regs);
extern void iic_cause_IPI(int cpu, int mesg);
extern void iic_request_IPIs(void);
extern void iic_setup_cpu(void);
extern void iic_local_enable(void);
extern void iic_local_disable(void);
extern void spider_init_IRQ(void);
extern int spider_get_irq(unsigned long int_pending);
#endif
#endif /* ASM_BPA_IIC_H */
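The comment block above fixes the per-node IRQ encoding. As a quick illustration (a standalone sketch, not part of the patch; the offsets are mirrored from the bpa_iic.h enum, and the node/class/unit values are arbitrary samples), this is how iic_external_get_irq() and iic_get_irq() compose IRQ numbers:
#include <stdio.h>
/* offsets mirrored from the enum in bpa_iic.h */
#define IIC_SPE_OFFSET   0x40
#define IIC_CLASS_STRIDE 0x10
#define IIC_IPI_OFFSET   0x70
#define IIC_NODE_STRIDE  0x80
int main(void)
{
	/* SPE unit 3, class 1, node 2, following iic_external_get_irq() */
	int spe_irq = IIC_SPE_OFFSET + 1 * IIC_CLASS_STRIDE
		+ 2 * IIC_NODE_STRIDE + 3;
	printf("SPE irq: 0x%x\n", spe_irq); /* 0x153, in node 2's 0x151-0x15a window */
	/* IPI with priority 0x10, following iic_get_irq() */
	int ipi_irq = IIC_IPI_OFFSET + (0x10 >> 4);
	printf("IPI irq: 0x%x\n", ipi_irq); /* 0x71 */
	return 0;
}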

View file

@ -0,0 +1,377 @@
/*
* IOMMU implementation for Broadband Processor Architecture
* We just establish a linear mapping at boot by setting all the
* IOPT cache entries in the CPU.
* The mapping functions should be identical to pci_direct_iommu,
* except for the handling of the high order bit that is required
* by the Spider bridge. These should be split into a separate
* file at the point where we get a different bridge chip.
*
* Copyright (C) 2005 IBM Deutschland Entwicklung GmbH,
* Arnd Bergmann <arndb@de.ibm.com>
*
* Based on linear mapping
* Copyright (C) 2003 Benjamin Herrenschmidt (benh@kernel.crashing.org)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#undef DEBUG
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/delay.h>
#include <linux/string.h>
#include <linux/init.h>
#include <linux/bootmem.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <asm/sections.h>
#include <asm/iommu.h>
#include <asm/io.h>
#include <asm/prom.h>
#include <asm/pci-bridge.h>
#include <asm/machdep.h>
#include <asm/pmac_feature.h>
#include <asm/abs_addr.h>
#include <asm/system.h>
#include "pci.h"
#include "bpa_iommu.h"
static inline unsigned long
get_iopt_entry(unsigned long real_address, unsigned long ioid,
unsigned long prot)
{
return (prot & IOPT_PROT_MASK)
| (IOPT_COHERENT)
| (IOPT_ORDER_VC)
| (real_address & IOPT_RPN_MASK)
| (ioid & IOPT_IOID_MASK);
}
typedef struct {
unsigned long val;
} ioste;
static inline ioste
mk_ioste(unsigned long val)
{
ioste ioste = { .val = val, };
return ioste;
}
static inline ioste
get_iost_entry(unsigned long iopt_base, unsigned long io_address, unsigned page_size)
{
unsigned long ps;
unsigned long iostep;
unsigned long nnpt;
unsigned long shift;
switch (page_size) {
case 0x1000000:
ps = IOST_PS_16M;
nnpt = 0; /* one page per segment */
shift = 5; /* segment has 16 iopt entries */
break;
case 0x100000:
ps = IOST_PS_1M;
nnpt = 0; /* one page per segment */
shift = 1; /* segment has 256 iopt entries */
break;
case 0x10000:
ps = IOST_PS_64K;
nnpt = 0x07; /* 8 pages per io page table */
shift = 0; /* all entries are used */
break;
case 0x1000:
ps = IOST_PS_4K;
nnpt = 0x7f; /* 128 pages per io page table */
shift = 0; /* all entries are used */
break;
default: /* not a known compile time constant */
BUILD_BUG_ON(1);
break;
}
iostep = iopt_base +
/* need 8 bytes per iopte */
(((io_address / page_size * 8)
/* align io page tables on 4k page boundaries */
<< shift)
/* nnpt+1 pages go into each iopt */
& ~(nnpt << 12));
nnpt++; /* this seems to work, but the documentation is not clear
about whether we put nnpt or nnpt-1 into the ioste bits.
In theory, this can't work for 4k pages. */
return mk_ioste(IOST_VALID_MASK
| (iostep & IOST_PT_BASE_MASK)
| ((nnpt << 5) & IOST_NNPT_MASK)
| (ps & IOST_PS_MASK));
}
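/* worked example (illustration only): for io_page_size = 16M we have
 * shift = 5 and nnpt = 0, so io_address 0x10000000 gives
 * iostep = iopt_base + ((0x10000000 / 0x1000000) * 8 << 5)
 *        = iopt_base + 0x1000; each 256MB segment's 16 entries thus
 * start on their own 4k boundary, as the alignment comment says. */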
/* compute the address of an io pte */
static inline unsigned long
get_ioptep(ioste iost_entry, unsigned long io_address)
{
unsigned long iopt_base;
unsigned long page_size;
unsigned long page_number;
unsigned long iopt_offset;
iopt_base = iost_entry.val & IOST_PT_BASE_MASK;
page_size = iost_entry.val & IOST_PS_MASK;
/* decode page size to compute page number */
page_number = (io_address & 0x0fffffff) >> (10 + 2 * page_size);
/* page number is an offset into the io page table */
iopt_offset = (page_number << 3) & 0x7fff8ul;
return iopt_base + iopt_offset;
}
/* compute the tag field of the iopt cache entry */
static inline unsigned long
get_ioc_tag(ioste iost_entry, unsigned long io_address)
{
unsigned long iopte = get_ioptep(iost_entry, io_address);
return IOPT_VALID_MASK
| ((iopte & 0x00000000000000ff8ul) >> 3)
| ((iopte & 0x0000003fffffc0000ul) >> 9);
}
/* compute the hashed 6 bit index for the 4-way associative pte cache */
static inline unsigned long
get_ioc_hash(ioste iost_entry, unsigned long io_address)
{
unsigned long iopte = get_ioptep(iost_entry, io_address);
return ((iopte & 0x000000000000001f8ul) >> 3)
^ ((iopte & 0x00000000000020000ul) >> 17)
^ ((iopte & 0x00000000000010000ul) >> 15)
^ ((iopte & 0x00000000000008000ul) >> 13)
^ ((iopte & 0x00000000000004000ul) >> 11)
^ ((iopte & 0x00000000000002000ul) >> 9)
^ ((iopte & 0x00000000000001000ul) >> 7);
}
/* same as above, but pretend that we have a simpler 1-way associative
pte cache with an 8 bit index */
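/* (the final XOR term below folds iopte bits 14..15 into index bits 6..7,
   widening the 6-bit hash above to the 8-bit index) */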
static inline unsigned long
get_ioc_hash_1way(ioste iost_entry, unsigned long io_address)
{
unsigned long iopte = get_ioptep(iost_entry, io_address);
return ((iopte & 0x000000000000001f8ul) >> 3)
^ ((iopte & 0x00000000000020000ul) >> 17)
^ ((iopte & 0x00000000000010000ul) >> 15)
^ ((iopte & 0x00000000000008000ul) >> 13)
^ ((iopte & 0x00000000000004000ul) >> 11)
^ ((iopte & 0x00000000000002000ul) >> 9)
^ ((iopte & 0x00000000000001000ul) >> 7)
^ ((iopte & 0x0000000000000c000ul) >> 8);
}
static inline ioste
get_iost_cache(void __iomem *base, unsigned long index)
{
unsigned long __iomem *p = (base + IOC_ST_CACHE_DIR);
return mk_ioste(in_be64(&p[index]));
}
static inline void
set_iost_cache(void __iomem *base, unsigned long index, ioste ste)
{
unsigned long __iomem *p = (base + IOC_ST_CACHE_DIR);
pr_debug("ioste %02lx was %016lx, store %016lx", index,
get_iost_cache(base, index).val, ste.val);
out_be64(&p[index], ste.val);
pr_debug(" now %016lx\n", get_iost_cache(base, index).val);
}
static inline unsigned long
get_iopt_cache(void __iomem *base, unsigned long index, unsigned long *tag)
{
unsigned long __iomem *tags = (void *)(base + IOC_PT_CACHE_DIR);
unsigned long __iomem *p = (void *)(base + IOC_PT_CACHE_REG);
*tag = tags[index];
rmb();
return *p;
}
static inline void
set_iopt_cache(void __iomem *base, unsigned long index,
unsigned long tag, unsigned long val)
{
unsigned long __iomem *tags = base + IOC_PT_CACHE_DIR;
unsigned long __iomem *p = base + IOC_PT_CACHE_REG;
unsigned long oldtag; /* previous tag, read back by get_iopt_cache() below */
pr_debug("iopt %02lx was v%016lx/t%016lx, store v%016lx/t%016lx\n",
	index, get_iopt_cache(base, index, &oldtag), oldtag, val, tag);
out_be64(p, val);
out_be64(&tags[index], tag);
}
static inline void
set_iost_origin(void __iomem *base)
{
unsigned long __iomem *p = base + IOC_ST_ORIGIN;
unsigned long origin = IOSTO_ENABLE | IOSTO_SW;
pr_debug("iost_origin %016lx, now %016lx\n", in_be64(p), origin);
out_be64(p, origin);
}
static inline void
set_iocmd_config(void __iomem *base)
{
unsigned long __iomem *p = base + 0xc00;
unsigned long conf;
conf = in_be64(p);
pr_debug("iost_conf %016lx, now %016lx\n", conf, conf | IOCMD_CONF_TE);
out_be64(p, conf | IOCMD_CONF_TE);
}
/* FIXME: get these from the device tree */
#define ioc_base 0x20000511000ull
#define ioc_mmio_base 0x20000510000ull
#define ioid 0x48a
#define iopt_phys_offset (- 0x20000000) /* We have a 512MB offset from the SB */
#define io_page_size 0x1000000
static unsigned long map_iopt_entry(unsigned long address)
{
switch (address >> 20) {
case 0x600:
address = 0x24020000000ull; /* spider i/o */
break;
default:
address += iopt_phys_offset;
break;
}
return get_iopt_entry(address, ioid, IOPT_PROT_RW);
}
static void iommu_bus_setup_null(struct pci_bus *b) { }
static void iommu_dev_setup_null(struct pci_dev *d) { }
/* initialize the iommu to support a simple linear mapping
* for each DMA window used by any device. For now, we
* happen to know that there is only one DMA window in use,
* starting at iopt_phys_offset. */
static void bpa_map_iommu(void)
{
unsigned long address;
void __iomem *base;
ioste ioste;
unsigned long index;
base = __ioremap(ioc_base, 0x1000, _PAGE_NO_CACHE);
pr_debug("%lx mapped to %p\n", ioc_base, base);
set_iocmd_config(base);
iounmap(base);
base = __ioremap(ioc_mmio_base, 0x1000, _PAGE_NO_CACHE);
pr_debug("%lx mapped to %p\n", ioc_mmio_base, base);
set_iost_origin(base);
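/* prime the caches for the first 4GB of I/O space: one segment table
 * entry per 256MB segment, one pte cache entry per io page */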
for (address = 0; address < 0x100000000ul; address += io_page_size) {
ioste = get_iost_entry(0x10000000000ul, address, io_page_size);
if ((address & 0xfffffff) == 0) /* segment start */
set_iost_cache(base, address >> 28, ioste);
index = get_ioc_hash_1way(ioste, address);
pr_debug("addr %08lx, index %02lx, ioste %016lx\n",
address, index, ioste.val);
set_iopt_cache(base,
get_ioc_hash_1way(ioste, address),
get_ioc_tag(ioste, address),
map_iopt_entry(address));
}
iounmap(base);
}
static void *bpa_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, unsigned int __nocast flag)
{
void *ret;
ret = (void *)__get_free_pages(flag, get_order(size));
if (ret != NULL) {
memset(ret, 0, size);
*dma_handle = virt_to_abs(ret) | BPA_DMA_VALID;
}
return ret;
}
static void bpa_free_coherent(struct device *hwdev, size_t size,
void *vaddr, dma_addr_t dma_handle)
{
free_pages((unsigned long)vaddr, get_order(size));
}
static dma_addr_t bpa_map_single(struct device *hwdev, void *ptr,
size_t size, enum dma_data_direction direction)
{
return virt_to_abs(ptr) | BPA_DMA_VALID;
}
static void bpa_unmap_single(struct device *hwdev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction direction)
{
}
static int bpa_map_sg(struct device *hwdev, struct scatterlist *sg,
int nents, enum dma_data_direction direction)
{
int i;
for (i = 0; i < nents; i++, sg++) {
sg->dma_address = (page_to_phys(sg->page) + sg->offset)
| BPA_DMA_VALID;
sg->dma_length = sg->length;
}
return nents;
}
static void bpa_unmap_sg(struct device *hwdev, struct scatterlist *sg,
int nents, enum dma_data_direction direction)
{
}
static int bpa_dma_supported(struct device *dev, u64 mask)
{
return mask < 0x100000000ull;
}
void bpa_init_iommu(void)
{
bpa_map_iommu();
/* Direct I/O, IOMMU off */
ppc_md.iommu_dev_setup = iommu_dev_setup_null;
ppc_md.iommu_bus_setup = iommu_bus_setup_null;
pci_dma_ops.alloc_coherent = bpa_alloc_coherent;
pci_dma_ops.free_coherent = bpa_free_coherent;
pci_dma_ops.map_single = bpa_map_single;
pci_dma_ops.unmap_single = bpa_unmap_single;
pci_dma_ops.map_sg = bpa_map_sg;
pci_dma_ops.unmap_sg = bpa_unmap_sg;
pci_dma_ops.dma_supported = bpa_dma_supported;
}
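These DMA ops implement the direct mapping described in the header comment, with the Spider bridge's high-order tag bit applied to every address. A standalone sketch (not part of the patch; BPA_DMA_VALID is mirrored from bpa_iommu.h, and the buffer address is a hypothetical virt_to_abs() result) of what bpa_map_single() and bpa_dma_supported() compute:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#define BPA_DMA_VALID 0x80000000u /* mirrored from bpa_iommu.h */
int main(void)
{
	uint64_t abs_addr = 0x01230000;            /* hypothetical virt_to_abs() result */
	uint64_t dma = abs_addr | BPA_DMA_VALID;
	printf("dma addr: 0x%" PRIx64 "\n", dma);  /* 0x81230000 */
	/* bpa_dma_supported() accepts masks below 4GB, since only the
	 * 2GB window under the tag bit is addressable anyway */
	uint64_t mask = 0x7fffffffull;
	printf("supported: %d\n", mask < 0x100000000ull); /* 1 */
	return 0;
}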

View file

@ -0,0 +1,65 @@
#ifndef BPA_IOMMU_H
#define BPA_IOMMU_H
/* some constants */
enum {
/* segment table entries */
IOST_VALID_MASK = 0x8000000000000000ul,
IOST_TAG_MASK = 0x3000000000000000ul,
IOST_PT_BASE_MASK = 0x000003fffffff000ul,
IOST_NNPT_MASK = 0x0000000000000fe0ul,
IOST_PS_MASK = 0x000000000000000ful,
IOST_PS_4K = 0x1,
IOST_PS_64K = 0x3,
IOST_PS_1M = 0x5,
IOST_PS_16M = 0x7,
/* iopt tag register */
IOPT_VALID_MASK = 0x0000000200000000ul,
IOPT_TAG_MASK = 0x00000001fffffffful,
/* iopt cache register */
IOPT_PROT_MASK = 0xc000000000000000ul,
IOPT_PROT_NONE = 0x0000000000000000ul,
IOPT_PROT_READ = 0x4000000000000000ul,
IOPT_PROT_WRITE = 0x8000000000000000ul,
IOPT_PROT_RW = 0xc000000000000000ul,
IOPT_COHERENT = 0x2000000000000000ul,
IOPT_ORDER_MASK = 0x1800000000000000ul,
/* order access to same IOID/VC on same address */
IOPT_ORDER_ADDR = 0x0800000000000000ul,
/* similar, but only after a write access */
IOPT_ORDER_WRITES = 0x1000000000000000ul,
/* Order all accesses to same IOID/VC */
IOPT_ORDER_VC = 0x1800000000000000ul,
IOPT_RPN_MASK = 0x000003fffffff000ul,
IOPT_HINT_MASK = 0x0000000000000800ul,
IOPT_IOID_MASK = 0x00000000000007fful,
IOSTO_ENABLE = 0x8000000000000000ul,
IOSTO_ORIGIN = 0x000003fffffff000ul,
IOSTO_HW = 0x0000000000000800ul,
IOSTO_SW = 0x0000000000000400ul,
IOCMD_CONF_TE = 0x0000800000000000ul,
/* memory mapped registers */
IOC_PT_CACHE_DIR = 0x000,
IOC_ST_CACHE_DIR = 0x800,
IOC_PT_CACHE_REG = 0x910,
IOC_ST_ORIGIN = 0x918,
IOC_CONF = 0x930,
/* The high bit needs to be set on every DMA address,
only 2GB are addressable */
BPA_DMA_VALID = 0x80000000,
BPA_DMA_MASK = 0x7fffffff,
};
void bpa_init_iommu(void);
#endif
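For illustration, a standalone sketch (not part of the patch; the masks are mirrored from the enum above, and the sample address is arbitrary apart from the ioid that bpa_iommu.c hardcodes) of how get_iopt_entry() composes an entry from these fields:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
/* masks mirrored from the bpa_iommu.h enum */
#define IOPT_PROT_RW   0xc000000000000000ull
#define IOPT_COHERENT  0x2000000000000000ull
#define IOPT_ORDER_VC  0x1800000000000000ull
#define IOPT_RPN_MASK  0x000003fffffff000ull
#define IOPT_IOID_MASK 0x00000000000007ffull
int main(void)
{
	uint64_t real_address = 0x20000000ull; /* sample page-aligned address */
	uint64_t ioid = 0x48a;                 /* value hardcoded in bpa_iommu.c */
	uint64_t entry = IOPT_PROT_RW | IOPT_COHERENT | IOPT_ORDER_VC
		| (real_address & IOPT_RPN_MASK)
		| (ioid & IOPT_IOID_MASK);
	printf("iopt entry: 0x%016" PRIx64 "\n", entry); /* 0xf80000002000048a */
	return 0;
}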

Some files were not shown because too many files have changed in this diff.