NAME
mpctl() - multiprocessor control
SYNOPSIS
Remarks
Much of the functionality of this capability is highly dependent on the
underlying hardware. An application that uses this system call should
not be expected to be portable across architectures or implementations.
Some hardware platforms support online addition and deletion of proces‐
sors. Due to this capability, processors and locality domains may be
added or deleted while the system is running. Applications should be
written to handle processor IDs and locality domain IDs that dynami‐
cally appear or disappear (for example, sometime after obtaining the
IDs of all the processors in the system an application may try to bind
an LWP to one of those processors - this system call will return an
error if that processor had been deleted).
Processor sets restrict application execution to a designated group of
processors. Some applications may query information about processors
and locality domains available to them, while other applications may
require system-wide information. The interface supports two unique
sets of command requests for these purposes.
Applications using the pthread interfaces should not use this system
call. A special set of routines has been developed for use by pthread
applications. See the pthread_processor_bind_np(3T) manual page for
information on these interfaces.
DESCRIPTION
mpctl() provides a means of determining how many processors and locality
domains are available in the system, and of assigning processes or light-
weight processes to execute on specific processors or within a specific
locality domain.
A locality domain consists of a related collection of processors, mem‐
ory, and peripheral resources that comprise a fundamental building
block of the system. All processors and peripheral devices in a given
locality domain have equal latency to the memory contained within that
locality domain. Use sysconf(2) with the appropriate name to check whether
the ccNUMA functionality is enabled and available on the system.
Processor sets provide an alternative application scheduling allocation
domain. A processor set comprises an isolated group of processors for
exclusive use by applications assigned to the processor set. Applications
may use mpctl() to query the processors and locality domains available to
them and scale and optimize accordingly. Use sysconf(2) with the
appropriate name to check whether the processor set functionality is
enabled and available on the system.
The mpctl() call is expected to be used to increase performance in certain
applications, but should not be used to ensure correctness of an
application. Specifically, cooperating processes/lightweight processes
should not rely on processor or locality domain assignment in lieu of a
synchronization mechanism (such as semaphores).
Machine Topology Information
Warning: Processor and locality domain IDs are not guaranteed to exist
in numerical order. There may be holes in a sequential list of IDs.
Due to the capability of online addition and deletion of processors on
some platforms, IDs obtained via these interfaces may be invalid at a
later time. Likewise, the number of processors and locality domains in
the system may also change due to processors being added or deleted.
See the Processor Set Information section below to query machine topology
within the application's processor set.
For processor topology use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
This request returns the number of enabled spus (processors) in
the system. It will always be greater than or
equal to 1. The spu and pid arguments are
ignored.
This request returns the ID of the first enabled processor in
the system.
The spu and pid arguments are ignored.
This request returns the ID of the next enabled processor in the
system
after spu. The pid argument is ignored.
Typically, the previous request is called to determine the first
spu; this request is then called in a loop (until the call
returns -1) to determine the IDs of the remaining spus, as in the
sketch below.
This request returns the ID of the processor the caller
is currently running on (NOT the processor
assignment of the caller). The spu and pid
arguments are ignored.
Warning: The information returned by this sys‐
tem call may be out-of-date arbitrarily soon
after the call completes due to the scheduler
context switching the caller onto a different
processor.
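As an illustration (not part of the original text), the enumeration requests
above are normally combined into a loop such as the following sketch. The
request names MPC_GETNUMSPUS, MPC_GETFIRSTSPU and MPC_GETNEXTSPU are assumed
from <sys/mpctl.h> and should be verified there; on releases with processor
set support the unsuffixed names may be relative to the caller's processor
set, with _SYS variants for system-wide queries.

    #include <stdio.h>
    #include <sys/mpctl.h>

    int
    main(void)
    {
        /* Assumed request names; see the note above. */
        int nspus = mpctl(MPC_GETNUMSPUS, 0, 0);
        int spu   = mpctl(MPC_GETFIRSTSPU, 0, 0);

        printf("%d enabled processors\n", nspus);
        while (spu != -1) {
            printf("  spu %d\n", spu);
            spu = mpctl(MPC_GETNEXTSPU, spu, 0);  /* -1 after the last one */
        }
        return 0;
    }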
For locality domain topology use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
This request returns the number of active locality domains in
the system.
An active locality domain has at least one
enabled processor in it. The number of active
locality domains in the system will always be
greater than or equal to 1. The ldom and pid
arguments are ignored.
This request returns the ID of the first active locality domain
in the system.
The ldom and pid arguments are ignored.
This request returns the ID of the next active locality domain
in the system
after ldom. The pid argument is ignored.
Typically, the previous request is called to determine the first
locality domain; this request is then called in a loop
(until the call returns -1) to determine the
IDs of the remaining locality domains, as in the sketch below.
This request returns the ID of the ldom that the caller
is currently running on (NOT the ldom assign‐
ment of the caller). The ldom and pid argu‐
ments are ignored.
Warning: The information returned by this sys‐
tem call may be out-of-date arbitrarily soon
after the call completes due to the scheduler
context switching the caller onto a different
ldom.
This request returns the number of enabled processors in the
locality domain
ldom. The pid argument is ignored.
This request returns the ID of the locality domain containing
processor
spu. The pid argument is ignored.
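A comparable sketch (again illustrative only, with MPC_GETNUMLDOMS,
MPC_GETFIRSTLDOM, MPC_GETNEXTLDOM and MPC_LDOMSPUS assumed from
<sys/mpctl.h>) walks the active locality domains and reports how many
enabled processors each contains.

    #include <stdio.h>
    #include <sys/mpctl.h>

    int
    main(void)
    {
        int nldoms = mpctl(MPC_GETNUMLDOMS, 0, 0);
        int ldom   = mpctl(MPC_GETFIRSTLDOM, 0, 0);

        printf("%d active locality domains\n", nldoms);
        while (ldom != -1) {
            /* enabled processors contained in this locality domain */
            int spus = mpctl(MPC_LDOMSPUS, ldom, 0);

            printf("  ldom %d: %d spus\n", ldom, spus);
            ldom = mpctl(MPC_GETNEXTLDOM, ldom, 0);
        }
        return 0;
    }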
Proximity Topology Information
All processors in a given locality domain have equal latency to the
memory contained within that locality domain. However, a processor may
have different cache-to-cache access latency to different processors
within its locality domain. The processors with the same cache-to-
cache access latency are said to be proximate to one another and form a
proximity set. A processor's cache-to-cache access latency to a pro‐
cessor within its proximity set is lower compared to a processor not in
its proximity set even within the same locality domain. By definition,
a processor is said to be proximate to itself. The topology of the
processors in a proximity set is called the Proximity Topology.
Proximity Topology is highly dependent on the underlying architecture
of the system. An example of a proximity set and the architecture sup‐
porting it is a set of processors on the same Front Side Bus (FSB) on
systems that use FSBs. Depending on the architecture:
· each processor by itself may be shown in its proximity set
· a subset of processors belonging to a locality domain may be
shown in one proximity set
· all processors in a locality domain may be shown in one prox‐
imity set
Note that there may or may not be more than one proximity set in a
given locality domain.
Some applications that require only a subset of processors in the sys‐
tem may see performance benefit by running on processors in the same
proximity set. This can be achieved by creating a processor set with
processors from the same proximity set and running the application in
this processor set.
For proximity topology use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
This request returns the number of enabled spus (processors) in
the
system that are in the same proximity set as
that of spu. If spu is enabled, the value
returned will be greater than or equal to 1.
Otherwise -1 is returned. The pid argument is
ignored.
This request returns the ID of the first enabled processor in
the system
that is proximate to spu. If spu is enabled,
it will return a valid processor ID. Other‐
wise -1 is returned. The pid argument is
ignored.
This request returns the ID of the next enabled processor in the
system
that is proximate to spu. The pid argument is
ignored.
Typically, the previous request is called to determine the first
proximate spu; this request is then called in a loop
(until the call returns -1) to determine the
IDs of the remaining proximate spus.
This request returns the number of enabled spus (processors) in
the
processor set of the calling thread and that
are in the same proximity set as that of spu.
Even when spu is enabled, the return value
will be 0 if none of the proximate processors
contribute to the processor set of the calling
thread. If spu is not enabled, -1 is
returned. The pid argument is ignored.
This request returns the ID of the first enabled processor which
is in
the processor set of the calling thread and is
proximate to spu. Even when spu is enabled,
the return value will be -1 if none of the
proximate processors contribute to the proces‐
sor set of the calling thread. If spu is not
enabled, -1 is returned. The pid argument is
ignored.
This request returns the ID of the next enabled processor which
is in
the processor set of the calling thread and is
proximate to spu. The pid argument is
ignored.
Typically, the previous request is called to determine the first
proximate spu; this request is then called in a loop
(until the call returns -1) to determine the
IDs of the remaining proximate spus, as in the sketch below.
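Every "number / first / next" family above follows the same pattern, so a
small helper can walk any of them. Because the proximity request names are
elided in this copy of the page, the sketch below takes the two request
codes as parameters rather than naming them.

    #include <stdio.h>
    #include <sys/mpctl.h>

    /* Print every processor returned by a first/next proximity request
     * pair for the proximity set of spu (sketch; pass the request codes
     * from <sys/mpctl.h>). */
    static void
    walk_proximity_set(int first_request, int next_request, int spu)
    {
        int prox = mpctl(first_request, spu, 0);  /* -1 if spu not enabled */

        while (prox != -1) {
            printf("spu %d is proximate to spu %d\n", prox, spu);
            prox = mpctl(next_request, prox, 0);
        }
    }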
Processor Set Information
Warning: Dynamic creation and deletion of processor sets, and dynamic
reassignment of a processor from one processor set to another may
occur. All processors in the system comprise one processor set by
default at boot time until new processor sets are created and config‐
ured by users.
The following command requests return topology information on proces‐
sors and locality domains in the processor set of the calling thread.
Only an enabled processor can be in a processor set. A locality domain
is said to be in a processor set, if it contributes at least one pro‐
cessor to that processor set.
For processor topology use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
This request returns the number of spus (processors) in the pro‐
cessor set of
the calling thread. The spu and pid arguments
are ignored.
This request returns the ID of the first processor in the pro‐
cessor set of
the calling thread. The spu and pid arguments
are ignored.
This request returns the ID of the next processor in the proces‐
sor set of
the calling thread after spu. The pid argu‐
ment is ignored.
Typically, the previous request is called to determine the first
spu; this request is then called in a loop (until the call
returns -1) to determine the IDs of the
remaining spus.
For locality domain topology use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
This request returns the number of locality domains in the pro‐
cessor set of
the calling thread. The ldom and pid argu‐
ments are ignored.
This request returns the ID of the first locality domain in the
processor set
of the calling thread. The ldom and pid argu‐
ments are ignored.
This request returns the ID of the next locality domain in the
processor set
of the calling thread after ldom. The pid
argument is ignored.
Typically, the previous request is called to determine the first
locality domain; this request is then called in a loop
(until the call returns -1) to determine the
IDs of the remaining locality domains.
This request returns the number of processors contributed by the
locality
domain ldom to the processor set of the call‐
ing thread. It may be less than the total
number of processors in the ldom. The pid
argument is ignored.
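Because these requests are restricted to the caller's processor set, they
are the natural way for an application to size itself. A minimal sketch,
assuming MPC_GETNUMSPUS is the processor-set-relative count request
described above:

    #include <sys/mpctl.h>

    /* Choose a worker count from the processors actually available to
     * the calling thread's processor set. */
    int
    choose_worker_count(void)
    {
        int n = mpctl(MPC_GETNUMSPUS, 0, 0);

        return (n > 0) ? n : 1;   /* fall back to one worker on error */
    }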
Processor Socket Information
For processor socket topology use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
This request returns the number of enabled sockets (physical
processors) in
the system. An enabled socket has at least
one core enabled. The value will be greater
than or equal to 1. If the call is not imple‐
mented the value will be -1. The spu and pid
arguments are ignored.
Logical Processor and Processor Core Information
On systems with the Hyper-Threading (HT) feature enabled, each physical
processor core may have more than one hyper-thread.
When hyper-threading is enabled at the firmware level, each hyper-
thread is represented to the operating system and applications as a
logical processor (LCPU). Hence the basic unit of any topology infor‐
mation is a logical processor. However, some applications may want to
get the system topology information at the physical processor core
level.
For processor core topology use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
Returns the number of enabled processor cores in the system;
this value
will always be greater than or equal to 1.
The spu and pid arguments are ignored.
Returns the processor core ID of the first enabled processor
core
in the system. The spu and pid arguments are
ignored.
Returns the processor core ID of the next enabled processor core
in the system after the specified processor
core ID. The pid argument is ignored. Typically, the previous
request is called to determine the first processor
core; this request is then called in a loop (until the
call returns -1) to determine the IDs of the
remaining processor cores.
Returns the ID of the processor core the calling thread is
currently running on (not the processor core
assignment of the caller). The spu and pid
arguments are ignored.
Returns the ID of the physical processor core containing the
spu. The pid argument is ignored.
Returns the number of processor cores in the processor set
of the calling thread. The spu and pid argu‐
ments are ignored.
Returns the ID of the first processor core in the processor
set of the calling thread. The spu and pid
arguments are ignored.
Returns the ID of the next processor core in the processor set of the
calling thread after the processor core speci‐
fied in spu. The pid argument is ignored.
For processor core and locality domain topology use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
Returns the number of enabled processor cores in the locality
domain; this
value will always be greater than or equal to
0. The pid argument is ignored.
Returns the number of enabled processor cores assigned to the
current processor set in the locality domain;
this value will always be greater than or
equal to 0. The pid argument is ignored.
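When both logical-processor and core counts are available, an application
can estimate how many hyper-threads share a core. The sketch below assumes
MPC_GETNUMSPUS_SYS and MPC_GETNUMCORES_SYS are the system-wide count
requests; confirm both names in <sys/mpctl.h>.

    #include <sys/mpctl.h>

    /* Returns logical processors per core (greater than 1 suggests that
     * Hyper-Threading is enabled), or -1 if either request fails or is
     * not implemented. */
    int
    logical_per_core(void)
    {
        int spus  = mpctl(MPC_GETNUMSPUS_SYS, 0, 0);
        int cores = mpctl(MPC_GETNUMCORES_SYS, 0, 0);

        if (spus <= 0 || cores <= 0)
            return -1;
        return spus / cores;
    }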
Processor and Locality Domain Binding
Each process shall have a processor and locality domain binding. Each
LWP (lightweight process) shall have a processor and locality domain
binding. The binding assignments for a lightweight process do not have
to match the binding assignments for the process.
Setting the processor or locality domain binding of a multithreaded
process causes all LWPs (lightweight processes) in the
target process to have their binding assignments changed to what is
specified. However, if any LWP belongs to a different processor set
such that the specified processor or locality domain does not contrib‐
ute to that processor set, the binding assignment for such an LWP is
not changed.
When a process creates another process (via fork(2), for example), the child process will
inherit the parent process's binding assignments (NOT the binding
assignments of the creating LWP). The initial LWP in the child process
shall inherit its binding assignments from the child process. LWPs
other than the initial LWP shall inherit their binding assignments from
the creating LWP (unless specified otherwise in the LWP create
attributes).
Processor binding and locality domain binding are mutually exclusive --
only one can be in effect at any time. If locality domain binding is
in effect, the target is allowed to execute on any processor within
that locality domain in its processor set.
Setting the processor or locality domain binding will fail if the tar‐
get processor or locality domain is not in the processor set of the
specified process or LWP.
WARNING: Due to the capability of online addition and deletion of pro‐
cessors on some platforms, processors may go away. If this occurs, any
processes or LWPs bound to a departing processor will be rebound to a
different processor with the same binding type. If the last processor
in a locality domain is removed, any processes or LWPs bound to a
departing locality domain will be rebound to a different locality
domain.
For processor binding use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
This call is advisory. This request asynchronously assigns
process pid to processor spu. The new proces‐
sor assignment is returned.
A reserved pid value may be used to refer to the calling
process. A reserved spu value may be passed to read the current
assignment, and another may be used to break any
specific-processor assignment, allowing
the process to float to any processor.
NOTE: This call is advisory. If the schedul‐
ing policy for a process conflicts with this
processor assignment, the scheduling policy
takes precedence. For example, when a proces‐
sor is ready to choose another process to exe‐
cute, and the highest priority process is
bound to a different processor, that process
will execute on the selecting processor rather
than waiting for the specified processor to
which it was bound.
If the process specified by pid is a multi‐
threaded process, all LWPs (lightweight pro‐
cesses) in the target process with the same
processor set binding as the target process
will have their processor assignment changed
to what is specified. The processor set bind‐
ing takes precedence over processor or local‐
ity domain binding.
This call is identical to the previous request,
except that the processor binding will take
precedence over the scheduling policy. This
call is synchronous. For example, when a pro‐
cessor is ready to choose another process to
execute, and the highest priority process is
bound to a different processor, that process
will not be selected to execute on the select‐
ing processor, but instead wait for the speci‐
fied processor to which it was bound. The
selecting processor will then choose a lower
priority process to execute on the processor.
NOTE: This option will not guarantee compli‐
ance with POSIX real-time scheduling algo‐
rithms.
If the process specified by pid is a multi‐
threaded process, all LWPs (lightweight pro‐
cesses) in the target process with the same
processor set binding as the target process
will have their processor assignment changed
to what is specified. The processor set bind‐
ing takes precedence over processor or local‐
ity domain binding.
This call is advisory. This request asynchronously assigns
LWP (lightweight process) lwpid to processor
spu. The new processor assignment is
returned. This option can be used to change
the processor assignment of LWPs in any
process.
A reserved lwpid value may be used to refer to the calling
LWP. A reserved spu value may be passed to read the current
assignment, and another may be used to break any
specific-processor assignment, allowing
the LWP to float to any processor.
NOTE: This call is advisory. If the schedul‐
ing policy for a LWP conflicts with this pro‐
cessor assignment, the scheduling policy takes
precedence. For example, when a processor is
ready to choose another LWP to execute, and
the highest priority LWP is bound to a differ‐
ent processor, then the LWP will execute on
the selecting processor rather than waiting
for the specified processor to which it was
bound.
This call is identical to the previous request,
except that the processor binding will take
precedence over the scheduling policy. This
call is synchronous. For example, when a pro‐
cessor is ready to choose another LWP to exe‐
cute, and the highest priority LWP is bound to
a different processor, that LWP will not be
selected to execute on the selecting proces‐
sor, but instead will wait for the specified
processor to which it was bound. The select‐
ing processor will then choose a lower prior‐
ity LWP to execute on the processor.
NOTE: This option will not guarantee compli‐
ance with POSIX real-time scheduling algo‐
rithms.
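A sketch of the advisory LWP form, assuming the request name MPC_SETLWP and
the "calling LWP" value MPC_SELFLWPID from <sys/mpctl.h> (the forced variant
would substitute the synchronous request described above):

    #include <stdio.h>
    #include <sys/mpctl.h>

    /* Advisory binding of the calling LWP to processor spu. */
    int
    bind_self_to_spu(int spu)
    {
        if (mpctl(MPC_SETLWP, spu, MPC_SELFLWPID) == -1) {
            perror("mpctl(MPC_SETLWP)");  /* e.g. spu not in our pset */
            return -1;
        }
        return 0;
    }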
For locality domain binding use:
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
This request synchronously assigns process
pid to locality domain ldom. The process may
now run on any processor within the locality
domain in its processor set. The new locality
domain assignment is returned.
A reserved pid value may be used to refer to the calling
process. A reserved ldom value may be passed to read the current
assignment, and another may be used to break any
specific-locality domain assignment,
allowing the process to float to any locality
domain.
When a processor in one locality domain is
ready to choose another process to execute,
and the highest priority process is bound to a
different locality domain, that process will
not be selected to execute on the selecting
processor, but instead wait for a processor in
the specified locality domain to which it was
bound. The selecting processor will then
choose a lower priority process to execute on
the processor.
NOTE: This option will not guarantee compli‐
ance with POSIX real-time scheduling algo‐
rithms.
If the process specified by pid is a multi‐
threaded process, all LWPs (lightweight pro‐
cesses) in the target process will have their
locality domain assignment changed to what is
specified. However, if any LWP belongs to a
processor set different from the target
process, and if the specified locality domain
does not contribute any processor to that
processor set, the binding assignment of
such an LWP is not changed.
This request synchronously assigns LWP (lightweight process)
lwpid to locality domain ldom. The LWP may
now run on any processor within the locality
domain. The new locality domain assignment is
returned. This option can be used to change
the locality domain assignment of LWPs in any
process.
A reserved lwpid value may be used to refer to the calling
LWP. A reserved ldom value may be passed to read the current
assignment, and another may be used to break any
specific-locality domain assignment,
allowing the LWP to float to any locality
domain.
When a processor is ready to choose another
LWP to execute, and the highest priority LWP
is bound to a processor in a different local‐
ity domain, then that LWP will not be selected
to execute on the selecting processor, but
instead will wait for a processor on the
locality domain to which it was bound. The
selecting processor will then choose a lower
priority LWP to execute on the processor.
NOTE: This option will not guarantee compli‐
ance with POSIX real-time scheduling algo‐
rithms.
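A corresponding sketch for locality domain binding of the calling process;
MPC_SETLDOM and MPC_SELFPID are assumed names to be verified in
<sys/mpctl.h>.

    #include <stdio.h>
    #include <sys/mpctl.h>

    /* Bind the calling process to locality domain ldom; per the text
     * above, its LWPs in the same processor set are rebound as well. */
    int
    bind_self_to_ldom(int ldom)
    {
        int assigned = mpctl(MPC_SETLDOM, ldom, MPC_SELFPID);

        if (assigned == -1)
            perror("mpctl(MPC_SETLDOM)");
        return assigned;   /* new locality domain assignment, or -1 */
    }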
Obtaining Processor and Locality Domain Binding Type
These options return the current binding type for the specified process
or LWP.
The request argument determines the precise action to be taken by mpctl() and
is one of the following:
Warning: This call is OBSOLETE and is only provided for back‐
wards
compatibility.
This request returns a value indicating the current
binding type of the process specified by
pid. The spu argument is ignored. If the
target process has a binding type other than
those this request can report, a generic value is returned.
This request returns the current binding type of the process
specified by
pid. The spu argument is ignored.
The currently defined return values indicate no binding,
advisory processor binding, processor binding,
and locality domain binding. Other
binding types may be added in future releases
and returned via this option. Applications
using this option should be written to handle
other return values in order to continue work‐
ing on future releases.
Warning: This call is OBSOLETE and is only provided for back‐
wards
compatibility.
This request returns a value indicating the current
binding type of the LWP specified by
lwpid. The spu argument is ignored. If the
target LWP has a binding type other than
those this request can report, a generic value is returned.
This request returns the current binding type of the LWP speci‐
fied by
lwpid. The spu argument is ignored.
The currently defined return values indicate no binding,
advisory processor binding, processor binding,
and locality domain binding. Other
binding types may be added in future releases
and returned via this option. Applications
using this option should be written to handle
other return values in order to continue work‐
ing on future releases.
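Since the binding-type constants are elided in this copy of the page, and
new types may be added in future releases, the sketch below takes the
request code as a parameter and treats any unrecognized result generically,
as recommended above.

    #include <stdio.h>
    #include <sys/mpctl.h>
    #include <sys/types.h>

    /* Query and report the binding type of pid using the binding-type
     * request code supplied by the caller (its name is elided here). */
    void
    report_binding_type(int bindtype_request, pid_t pid)
    {
        int type = mpctl(bindtype_request, 0, pid);  /* spu is ignored */

        if (type == -1) {
            perror("mpctl");
            return;
        }
        /* Compare type against the binding-type constants from
         * <sys/mpctl.h>; tolerate values added in later releases. */
        printf("binding type code %d\n", type);
    }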
Launch Policies
Each process shall have a launch policy. Each lightweight process
shall have a launch policy. The launch policy for a lightweight
process need not match the launch policy for the process. The launch
policy determines the locality domain where the newly created process
or LWP will be launched in a ccNUMA system. The locality domains cov‐
ered by a process's or LWP's processor set are the available locality
domains.
When a process creates another process (via fork(2), for example), the child process will
inherit the parent process's launch policy. The initial LWP in the
child process will inherit the launch policy of the creating LWP (and
not that of its process). Other LWPs in a multi-threaded process
inherit their launch policy from the creating LWP.
For all launch policies, the target process or LWP is bound to the
locality domain on which it was launched. The target is allowed to
execute on any processor within that locality domain.
When setting a launch policy, if the target already has processor or
locality domain binding, the existing binding will not be overwritten.
Instead the locality domain in which the target is bound (whether
locality domain binding or processor binding) will be used as the
starting locality domain for implementing the launch policy.
When setting a process launch policy, the launch policy specified shall
only be applied to the process. The launch policies of LWPs within the
process shall not be affected.
The interface currently supports the following launch policies:
When a launch policy is set for a process, it becomes the root of a new
launch tree. The launch policy determines which processes become part of
the launch tree. The new processes in the launch tree will be distributed
among available locality domains based on the launch policy for that launch
tree.
For the non tree based launch policies, the root process and only its
direct children form the launch tree. The new child process becomes the
root of a new launch tree. Since the launch tree for these policies
includes only the parent and its direct children, their distribution will
be more deterministic.
For the tree based launch policies, any new process created by the root
process or any of its descendents becomes part of the launch tree. When
creating a new process with these policies, if the root of the launch tree
has a different launch policy than the creator of the new process, the new
process becomes the root of a new launch tree.
The locality domains selected for new processes in the tree are dependent
on the order in which they are created. So, the process distribution for an
application with several levels in the launch tree may vary across
different runs.
When the launch policy for a process in a launch tree is changed, it
becomes the root of a new launch tree. However, the distribution of
existing processes in the old launch tree is not changed.
The LWP launch policy works the same as the process launch policy, except
that the LWP launch tree is contained within a process. When an LWP with a
launch policy creates a new process, the initial LWP in the new process
becomes the root of a new LWP launch tree.
A launch policy of none indicates there is no explicit launch policy for
the process or LWP. The operating system is free to select the optimal
distribution of processes and LWPs. No explicit locality domain binding is
applied to new processes and LWPs with this policy, unless they inherit the
binding from the creator process or LWP.
If the processor set binding for a process or an LWP in a launch tree is
changed to another processor set, that process or LWP becomes the root of a
new launch tree. When creating a new process or an LWP, if the root of the
launch tree is found to be in a different processor set, the new process or
LWP is made the root of a new launch tree.
NOTE: Locality domains are tightly tied to the physical components of the
underlying system. As a result, the performance observed when using launch
policies based on locality domains may vary from system to system. For
example, a system which contains 4 locality domains, each containing 32
processors, may exhibit different performance behaviors from a system that
contains 32 locality domains with 4 processors per domain. The launch
policy that provides optimal performance on one system may not provide
optimal performance on a different system for the same application.
For process launch policies use:
The request argument determines the precise action to be taken by mpctl()
and is one of the following:
MPC_GETPROCESS_LAUNCH
This request currently returns a value indicating the current launch policy
of the process specified by pid. Other launch policies may be added in
future releases and returned via this option. Applications using this
option should be written to handle other return values in order to continue
working on future releases. The ldom argument is ignored.
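A sketch of querying the launch policy with MPC_GETPROCESS_LAUNCH; the
"calling process" value MPC_SELFPID is an assumption, and because the
policy constants are elided in this copy of the page the result is simply
reported as a code.

    #include <stdio.h>
    #include <sys/mpctl.h>

    /* Report the launch policy of the calling process. */
    int
    current_launch_policy(void)
    {
        int policy = mpctl(MPC_GETPROCESS_LAUNCH, 0, MPC_SELFPID);

        if (policy == -1)
            perror("mpctl(MPC_GETPROCESS_LAUNCH)");
        else
            printf("launch policy code %d\n", policy);  /* may be a policy
                                                           added in a later
                                                           release */
        return policy;
    }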
This call establishes a round robin launch policy for the specified
process. The successive child processes are launched on different locality
domains in a round robin manner until all available locality domains have
been used by processes in the launch tree. At that point, the selection of
locality domains begins again from the original locality domain. The ldom
argument is ignored.
This call establishes a fill first launch policy for the specified process.
The successive child processes are launched on the same locality domain as
their parent process until one process has been created for each available
processor in the domain. At that point, a new locality domain is selected
and successive processes are launched there until there is one process per
processor. All available locality domains will be used before the original
domain is selected again. The ldom argument is ignored.
This call establishes a packed launch policy for the specified process. The
successive child processes are launched on the same locality domain as
their parent process. The ldom argument is ignored.
This call establishes a least loaded launch policy for the specified
process. The successive child processes are launched on the least loaded
locality domain in the processor set regardless of the location of their
parent process. The ldom argument is ignored.
This call establishes a tree based round robin launch policy for the
specified process. This request differs from the round robin policy above
in which processes become part of the launch tree. This launch policy
includes all descendents of the target process in the launch tree. The ldom
argument is ignored.
This call establishes a tree based fill first launch policy for the
specified process. This request differs from the fill first policy above in
which processes become part of the launch tree. This launch policy includes
all descendents of the target process in the launch tree. The ldom argument
is ignored.
This call unsets any launch policy in the process. The system will employ a
default, optimal policy in determining where the newly created process will
be launched. The existing binding of the process is not changed. The ldom
argument is ignored.
For LWP launch policies use:
The request argument determines the precise action to be taken by mpctl()
and is one of the following:
This request currently returns a value indicating the current launch policy
of the LWP specified by lwpid. Other launch policies may be added in future
releases and returned via this option. Applications using this option
should be written to handle other return values in order to continue
working on future releases. The ldom argument is ignored.
This call establishes a round robin launch policy for the specified LWP.
The successive child LWPs are launched on different locality domains in a
round robin manner until all available locality domains have been used by
LWPs in the launch tree. At that point, the selection of locality domains
begins again from the original locality domain. The ldom argument is
ignored.
This call establishes a fill first launch policy for the specified LWP. The
successive child LWPs are launched on the same locality domain as their
parent LWP until one thread has been created for each available processor
in the domain. At that point, a new locality domain is selected and
successive LWPs are launched there until there is one LWP per processor.
All available locality domains will be used before the original domain is
selected again. The ldom argument is ignored.
This call establishes a packed launch policy for the specified LWP. The
successive child LWPs are launched on the same locality domain as their
parent LWP. The ldom argument is ignored.
This call establishes a least loaded launch policy for the specified LWP.
The successive child LWPs are launched on the least loaded locality domain
in the processor set regardless of the location of their parent LWP. The
ldom argument is ignored.
This call establishes a tree based round robin launch policy for the
specified LWP. This request differs from the round robin policy above in
which LWPs become part of the launch tree. This launch policy includes all
descendents of the target LWP in the launch tree. The ldom argument is
ignored.
This call establishes a tree based fill first launch policy for the
specified LWP. This request differs from the fill first policy above in
which LWPs become part of the launch tree. This launch policy includes all
descendents of the target LWP in the launch tree. The ldom argument is
ignored.
This call unsets any launch policy in the LWP. The system will employ a
default, optimal policy in determining where the newly created LWP will be
launched. The existing binding of the LWP is not changed. The ldom argument
is ignored.
To change the processor assignment, locality domain assignment, or launch
policy of another process, the caller must either have the same effective
user ID as the target process, or have the appropriate privilege.
Security Restrictions
Some or all of the actions associated with this system call require the
appropriate privilege. Processes owned by the superuser have this
privilege. Processes owned by other users may have this privilege,
depending on system configuration. See privileges(5) for more information
about privileged access on systems that support fine-grained privileges.
RETURN VALUES
If mpctl() fails, -1 is returned. If mpctl() is successful, the value
returned is as specified for that command/option.
NOTE: In some cases a negative number other than -1 may be returned that
indicates a successful return.
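Because some successful requests legitimately return negative values other
than -1, a cautious wrapper treats only -1 as failure; a sketch:

    #include <stdio.h>
    #include <sys/mpctl.h>
    #include <sys/types.h>

    /* Thin wrapper: report failures, pass every other result through
     * (negative results other than -1 can still indicate success). */
    int
    checked_mpctl(int request, int arg, pid_t id)
    {
        int ret = mpctl(request, arg, id);

        if (ret == -1)
            perror("mpctl");
        return ret;
    }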
ERRORS
In general, mpctl() fails if one or more of the following is true:
pid or lwpid identifies a process or LWP that is not visible to the calling
thread.
request is an illegal number.
request is one of the next-processor requests and spu identifies the last
processor; or request is one of the next-locality-domain requests and ldom
identifies the last locality domain; or request is one of the
next-proximate-processor requests and spu identifies the last proximate
spu.
request is one of the requests that operates on a specific spu (such as the
proximity requests) and spu is not enabled.
request is to bind a process or an LWP to a processor or locality domain
that is not in the processor set of the specified process or LWP.
request is a processor binding request, spu is not the value that simply
reads the current assignment, pid identifies another process, and the
caller does not have the same effective user ID as the target process or
does not have the required privilege.
request is another request that modifies a target process, pid identifies
another process, and the caller does not have the same effective user ID as
the target process, or does not have the required privilege.
pid or lwpid identifies a process or LWP that does not exist.
SEE ALSO
getprivgrp(1), setprivgrp(1M), fork(2), getprivgrp(2), sysconf(2),
pthread_processor_bind_np(3T), pthread_launch_policy_np(3T), privgrp(4),
compartments(5), privileges(5).