Commit graph

19 commits

Author SHA1 Message Date
David S. Miller 64658743fd [SPARC64]: Remove most limitations to kernel image size.
Currently kernel images are limited to 8MB in size, and this causes
problems especially when enabling features that take up a lot of
kernel image space such as lockdep.

The code now will align the kernel image size up to 4MB and map that
many locked TLB entries.  So, the only practical limitation is the
number of available locked TLB entries, which is 16 on Cheetah and 64
on pre-Cheetah sparc64 cpus.  Niagara cpus don't actually have hw
locked TLB entry support.  Rather, the hypervisor transparently
provides support for "locked" TLB entries since it runs with physical
addressing and does the initial TLB miss processing.
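
As a rough illustration of the arithmetic (variable names here are
hypothetical, not the kernel's actual code), the number of locked
entries needed for a given image works out as:

	/* Each locked TLB entry covers one 4MB chunk of the kernel
	 * image once its size is rounded up to a 4MB boundary. */
	const unsigned long align = 4UL << 20;		/* 4MB */
	unsigned long aligned_size = (image_size + align - 1) & ~(align - 1);
	unsigned long locked_entries = aligned_size / align;

	/* e.g. a 14MB image rounds up to 16MB and needs 4 locked
	 * entries, well within the 16 (Cheetah) or 64 (pre-Cheetah)
	 * entries available. */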

Fully utilizing this change requires some help from SILO, a patch for
which will be submitted to the maintainer.  Essentially, SILO will
only currently map up to 8MB for the kernel image and that needs to be
increased.

Note that neither this patch nor the SILO bits will help with network
booting.  The openfirmware code will only map up to a certain amount
of kernel image during a network boot and there isn't much we can do
about that other than to implement a layered network booting
facility.  Solaris has this, and calls it "wanboot" and we may
implement something similar at some point.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-21 17:01:38 -07:00
Sam Ravnborg 0f7f22d9a4 [SPARC64]: Fix cpu trampoline et al. mismatch warnings.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-20 22:22:16 -08:00
David S. Miller 301feb6524 [SPARC64]: Fix lockdep, particularly on SMP.
As noted by Al Viro, when we try to call prom_set_trap_table()
in the SMP trampoline code we try to take the PROM call spinlock,
which doesn't work because the current thread pointer isn't
valid yet and lockdep depends upon that being correct.

Furthermore, we cannot set the current thread pointer register
because it can't be properly dereferenced until we return from
prom_set_trap_table().  Kernel TLB misses only work after that
call.

So do the PROM call to set the trap table directly instead of
going through the OBP library C code, and thus avoid the lock
altogether.
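
To make the problem concrete, here is a hedged sketch (names and
details are illustrative, not the exact library code) of why the
locked wrapper cannot be used from the trampoline:

	/* The OBP library serializes firmware calls with a spinlock.
	 * With lockdep enabled, taking the lock dereferences
	 * current_thread_info() (%g6), which is not yet valid on a
	 * cpu still running on the trampoline.  Sketch only. */
	static DEFINE_SPINLOCK(prom_entry_lock);

	void prom_set_trap_table(unsigned long tba)
	{
		unsigned long flags;

		spin_lock_irqsave(&prom_entry_lock, flags);	/* lockdep trips here */
		/* ... firmware call ... */
		spin_unlock_irqrestore(&prom_entry_lock, flags);
	}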

These calls are guaranteed to be fully serialized.

Since there are now no calls to the prom_set_trap_table{_sun4v}()
library functions, they can be deleted.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-09-16 11:51:15 -07:00
David S. Miller 7dc408808a [SPARC64]: SMP trampoline needs to avoid %tick_cmpr on sun4v too.
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-08-16 01:56:00 -07:00
David S. Miller b434e71933 [SPARC64]: Fix memory leak when cpu hotplugging.
Every time a cpu is added via hotplug, we allocate the per-cpu MONDO
queues but we never free them up.  Freeing isn't easy since the first
cpu gets this memory from bootmem.

Therefore, the simplest thing to do to fix this bug is to allocate the
queues for all possible cpus at boot time.
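
A minimal sketch of that approach (the helper name is hypothetical):

	/* Allocate the per-cpu MONDO queues once, for every possible
	 * cpu, instead of on each hotplug add (which leaked on remove). */
	void __init init_all_mondo_queues(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			alloc_mondo_queues_for(cpu);	/* hypothetical helper */
	}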

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-08-08 17:33:52 -07:00
David S. Miller 72aff53f1f [SPARC64]: Get SUN4V SMP working.
The sibling cpu bringup is extremely fragile.  We can only
perform the most basic calls until we take over the trap
table from the firmware/hypervisor on the new cpu.

This means no accesses to %g4, %g5, %g6 since those can't be
TLB translated without our trap handlers.

In order to achieve this:

1) Change sun4v_init_mondo_queues() so that it can operate in
   several modes.

   It can allocate the queues, or install them in the current
   processor, or both (sketched below).

   The boot cpu does both in its call early on.

   Later, the boot cpu allocates the sibling cpu queue, starts
   the sibling cpu, then the sibling cpu loads them in.

2) init_cur_cpu_trap() is changed to take the current_thread_info()
   as an argument instead of reading %g6 directly on the current
   cpu.

3) Create a trampoline stack for the sibling cpus.  We do our basic
   kernel calls using this stack, which is locked into the kernel
   image, then go to our proper thread stack after taking over the
   trap table.

4) While we are in this delicate startup state, we put 0xdeadbeef
   into %g4/%g5/%g6 in order to catch accidental accesses.

5) On the final prom_set_trap_table*() call, we put &init_thread_union
   into %g6.  This is a hack to make prom_world(0) work.  All it
   wants to do is restore the %asi register using
   get_thread_current_ds().

Longer term we should do the OBP calls to set the trap table by hand,
just like we do for everything else.  This would avoid that silly
prom_world(0) issue, and then we can remove the init_thread_union hack.
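
A minimal sketch of the mode idea from item 1 above (the flag and
helper names are assumptions, not the exact kernel code):

	enum { MQ_ALLOC = 1, MQ_LOAD = 2 };	/* illustrative mode flags */

	void sun4v_init_mondo_queues(int cpu, int mode)
	{
		if (mode & MQ_ALLOC)
			alloc_mondo_queues(cpu);	/* hypothetical */
		if (mode & MQ_LOAD)
			load_mondo_queues(cpu);		/* hypothetical */
	}

	/* Boot cpu:    sun4v_init_mondo_queues(boot_cpu, MQ_ALLOC | MQ_LOAD);
	 * Sibling cpu: the boot cpu does MQ_ALLOC for it, and the
	 *              sibling itself later does MQ_LOAD. */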

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:13:22 -08:00
David S. Miller 3af6e01e9a [SPARC64]: arch/sparc64/kernel/trampoline.S needs asm/cpudata.h
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:52 -08:00
David S. Miller b5a37e96b8 [SPARC64]: Fix mondo queue allocations.
We have to use bootmem during init_IRQ and page alloc
for sibling cpu calls.
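
A hedged sketch of that split (the helper name is hypothetical): the
buddy allocator is not usable at init_IRQ time, so the queue memory
has to come from bootmem there, while later sibling-cpu bringup can
use the page allocator:

	static void *alloc_queue_pages(unsigned long size, int use_bootmem)
	{
		if (use_bootmem)
			return alloc_bootmem_low_pages(size);
		return (void *) __get_free_pages(GFP_ATOMIC, get_order(size));
	}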

Also, fix incorrect hypervisor call return value
checks in the hypervisor SMP cpu mondo send code.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:26 -08:00
David S. Miller 12eaa328f9 [SPARC64]: Use ASI_SCRATCHPAD address 0x0 properly.
This is where the virtual address of the fault status
area belongs.

To set it up we don't make a hypervisor call, instead
we call OBP's SUNW,set-trap-table with the real address
of the fault status area as the second argument.  And
right before that call we write the virtual address into
ASI_SCRATCHPAD vaddr 0x0.
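
Roughly, the ordering looks like this (the stxa idiom is standard;
the variable names and the OBP call wrapper are assumptions):

	/* 1) Publish the *virtual* address of the fault status area
	 *    at ASI_SCRATCHPAD offset 0x0. */
	__asm__ __volatile__("stxa	%0, [%1] %2\n\t"
			     "membar	#Sync"
			     : /* no outputs */
			     : "r" (fault_status_vaddr),
			       "r" (0x0UL),
			       "i" (ASI_SCRATCHPAD));

	/* 2) Then call OBP's SUNW,set-trap-table with the *real*
	 *    address of the fault status area as the second argument. */
	prom_sunw_set_trap_table(trap_table, fault_status_raddr);	/* hypothetical wrapper */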

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:15 -08:00
David S. Miller 164c220fa3 [SPARC64]: Fix hypervisor call arg passing.
Function goes in %o5, args go in %o0 --> %o5.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:14 -08:00
David S. Miller d82ace7dc4 [SPARC64]: Detect sun4v early in boot process.
We look for "SUNW,sun4v" in the 'compatible' property
of the root OBP device tree node.
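
In rough C terms (the real check runs much earlier in the boot path;
the prom helper usage here is just a sketch), the detection scans the
NUL-separated 'compatible' list of the root node:

	char compat[64];
	int len = prom_getproperty(prom_root_node, "compatible",
				   compat, sizeof(compat));
	int is_sun4v = 0, i = 0;

	while (i < len) {
		if (!strcmp(&compat[i], "SUNW,sun4v")) {
			is_sun4v = 1;
			break;
		}
		i += strlen(&compat[i]) + 1;	/* next list entry */
	}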

Protect every %ver register access, to make sure it is
not touched on sun4v, as %ver is hyperprivileged there.

Lock kernel TLB entries using hypervisor calls instead of
calls into OBP.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:03 -08:00
David S. Miller ac29c11d4c [SPARC64]: Allocate and register the 4 sun4v mondo queues at bootup.
Needs to occur before we enable PSTATE_IE in %pstate.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:00 -08:00
David S. Miller 8b11bd12af [SPARC64]: Patch up mmu context register writes for sun4v.
sun4v uses ASI_MMU instead of ASI_DMMU.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:56 -08:00
David S. Miller 56fb4df6da [SPARC64]: Eliminate all usage of hard-coded trap globals.
UltraSPARC has special sets of global registers which are switched to
for certain trap types.  There is one set for MMU related traps, one
set for Interrupt Vector processing, and another set (called the
Alternate globals) for all other trap types.

For what seems like forever we've hard-coded the values in some of
these trap registers.  Some examples include:

1) Interrupt Vector global %g6 holds the current processor's interrupt
   work struct where received interrupts are managed for IRQ handler
   dispatch.

2) MMU global %g7 holds the base of the page tables of the currently
   active address space.

3) Alternate global %g6 held the current_thread_info() value.

Such hardcoding has resulted in some serious issues in many areas.
There are some code sequences where having another register available
would help clean up the implementation.  Taking traps such as
cross-calls from the OBP firmware requires some tricky code sequences
wherein we have to save away and restore all of the special sets of
global registers when we enter/exit OBP.

We were also using the IMMU TSB register on SMP to hold the per-cpu
area base address, which doesn't work any longer now that we actually
use the TSB facility of the cpu.

The implementation is pretty straightforward.  One tricky bit is
getting the current processor ID as that is different on different cpu
variants.  We use a stub with a fancy calling convention which we
patch at boot time.  The calling convention is that the stub is
branched to and the (PC - 4) to return to is in register %g1.  The cpu
number is left in %g6.  This stub can be invoked by using the
__GET_CPUID macro.

We use an array of per-cpu trap state to store the current thread and
physical address of the current address space's page tables.  The
TRAP_LOAD_THREAD_REG loads %g6 with the current thread from this
table; it uses __GET_CPUID and also clobbers %g1.
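
The table itself has roughly the following shape (a hedged sketch of
the idea; treat the exact field names as illustrative):

	struct trap_per_cpu {
		struct thread_info	*thread;	/* TRAP_LOAD_THREAD_REG */
		unsigned long		pgd_paddr;	/* TRAP_LOAD_PGD_PHYS */
	};

	extern struct trap_per_cpu trap_block[NR_CPUS];

	/* The assembler macros index this array with the cpu number
	 * that the patched __GET_CPUID stub leaves in %g6. */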

TRAP_LOAD_IRQ_WORK is used by the interrupt vector processing to load
the current processor's IRQ software state into %g6.  It also uses
__GET_CPUID and clobbers %g1.

Finally, TRAP_LOAD_PGD_PHYS loads the physical address base of the
current address space's page tables into %g7; it clobbers %g1 and uses
__GET_CPUID.

Many refinements are possible, as well as some tuning, with this stuff
in place.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:16 -08:00
David S. Miller 74bf4312ff [SPARC64]: Move away from virtual page tables, part 1.
We now use the TSB hardware assist features of the UltraSPARC
MMUs.

SMP is currently knowingly broken, we need to find another place
to store the per-cpu base pointers.  We hid them away in the TSB
base register, and that obviously will not work any more :-)

Another known broken case is non-8KB base page size.

Also noticed that flush_tlb_all() is not referenced anywhere, only
the internal __flush_tlb_all() (local cpu only) is used by the
sparc64 port, so we can get rid of flush_tlb_all().

The kernel gets its own 8KB TSB (swapper_tsb) and each address space
gets its own private 8KB TSB.  Later we can add code to dynamically
increase the size of per-process TSB as the RSS grows.  An 8KB TSB is
good enough for up to about a 4MB RSS, after which the TSB starts to
incur many capacity and conflict misses.
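
The 4MB figure follows from the TSB geometry, assuming the standard
16-byte entry (an 8-byte tag plus an 8-byte TTE):

	const unsigned long tsb_bytes = 8 * 1024;		/* one 8KB TSB */
	const unsigned long entries   = tsb_bytes / 16;		/* 512 entries */
	const unsigned long coverage  = entries * 8 * 1024;	/* 512 * 8KB = 4MB of 8KB pages */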

We even accumulate OBP translations into the kernel TSB.

Another area for refinement is large page size support.  We could use
a secondary address space TSB to handle those.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:13 -08:00
David S. Miller 0835ae0f27 [SPARC64]: Replace cheetah+ code patching with variables.
Instead of code patching to handle the page size fields in
the context registers, just use variables from which we get
the proper values.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-04 15:23:20 -07:00
David S. Miller bff06d5522 [SPARC64]: Rewrite bootup sequence.
Instead of all of this cpu-specific code to remap the kernel
to the correct location, use portable firmware calls to do
this instead.

What we do now is the following in position-independent
assembler:

	chosen_node = prom_finddevice("/chosen");
	prom_mmu_ihandle_cache = prom_getint(chosen_node, "mmu");
	vaddr = 4MB_ALIGN(current_text_addr());
	prom_translate(vaddr, &paddr_high, &paddr_low, &mode);
	prom_boot_mapping_mode = mode;
	prom_boot_mapping_phys_high = paddr_high;
	prom_boot_mapping_phys_low = paddr_low;
	prom_map(-1, 8 * 1024 * 1024, KERNBASE, paddr_low);

and that replaces the massive amount of by-hand TLB probing and
programming we used to do here.

The new code should also handle properly the case where the kernel
is mapped at the correct address already (think: future kexec
support).

Consequently, the bulk of remap_kernel() dies, as does the entirety
of arch/sparc64/prom/map.S.

We try to share some strings in the PROM library with the ones used
at bootup, and while we're here mark input strings to oplib.h routines
with "const" when appropriate.

There are many more simplifications now possible.  For one thing, we
can consolidate the two copies we now have of a lot of cpu setup code
sitting in head.S and trampoline.S.

This is a significant step towards CONFIG_DEBUG_PAGEALLOC support.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 20:11:33 -07:00
David S. Miller b445e26cbf [SPARC64]: Avoid membar instructions in delay slots.
In particular, avoid membar instructions in the delay
slot of a jmpl instruction.

UltraSPARC-I, II, IIi, and IIe have a bug, documented in
the UltraSPARC-IIi User's Manual, Appendix K, Erratum 51.

The long and short of it is that if the IMU unit misses
on a branch or jmpl, and there is a store buffer synchronizing
membar in the delay slot, the chip can stop fetching instructions.

If interrupts are enabled or some other trap is enabled, the
chip will unwedge itself, but performance will suffer.

We already had a workaround for this bug in a few spots, but
it's better to have the entire tree sanitized for this rule.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-27 15:42:04 -07:00
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00