Discussion:
NUMCPU >1 = IEE331A EXCESSIVE DISABLED SPIN LOOP DETECTED
Mike Stramba
2010-05-19 10:38:44 UTC
Permalink
Just for fun, I changed the NUMCPU to 2, when I IPL'd Turnkey MVS
(actually t33su),
the console displayed :

IEE331A EXCESSIVE DISABLED SPIN LOOP DETECTED
WAITING FOR LOCK RELEASE
REPLY U TO CONTINUE SPIN
IEF677I WARNING MESSAGE(S) FOR JOB JES2 ISSUED

- $HASP493 JES2 QUICK-START IS IN PROGRESS

- $HASP412 MAXIMUM OF 1 READER(S) EXCEEDED
IEE041I THE SYSTEM LOG IS NOW ACTIVE

I *thought* MVS would just ignore the second CPU, but apparently it
doesn't ... but it doesn't seem to utilize it "properly" either ?
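For context, the change Mike describes is one line in the Hercules configuration file, not anything inside MVS itself. A minimal sketch of the relevant statements (the statement names are real Hercules config keywords; the values shown are illustrative, not necessarily the stock TK3 settings):

```
# Hercules configuration file excerpt -- illustrative values
ARCHMODE S/370    # MVS 3.8J is a System/370 operating system
MAINSIZE 16       # main storage size in MB
NUMCPU   2        # emulated CPUs started at power-on; stock TK3 uses NUMCPU 1
```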
Jeff Sumner
2010-05-19 12:33:33 UTC
Permalink
Reply with "U" a few times. It'll eventually "catch" and work fine. My Ubuntu Linux boxes do it pretty often, but the Macs and Debian boxes don't.

J
Rick Fochtman
2010-05-19 14:39:43 UTC
Permalink
Unless the I/O load exceeds certain user-defined levels, all but one CPU
runs disabled for I/O interrupts. SPIN locks are used to serialize
activities between multiple CPU's in the CEC. What you've seen is not at
all unusual in a multi-processor CEC.
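For readers new to the term: a spin lock is just a loop that keeps retrying an atomic test-and-set until the holder releases, and MVS flags a loop that retries for too long with IEE331A. A toy sketch in Python (all names and the threshold are illustrative, not MVS internals; `threading.Lock` stands in for the hardware compare-and-swap):

```python
import threading

# Toy model of a spin lock with "excessive spin" detection, loosely
# analogous to what IEE331A reports.  All names and the threshold are
# illustrative, not MVS internals.

SPIN_THRESHOLD = 100_000          # retries before the spin is "excessive"

class SpinLock:
    def __init__(self):
        self._held = False
        self._guard = threading.Lock()   # stands in for an atomic CS/TS

    def _try_acquire(self):
        with self._guard:
            if not self._held:
                self._held = True
                return True
            return False

    def acquire(self):
        spins = 0
        while not self._try_acquire():   # the spin loop itself
            spins += 1
            if spins >= SPIN_THRESHOLD:
                # MVS would issue IEE331A here and let the operator
                # reply U to continue spinning; we just give up.
                raise RuntimeError("excessive spin loop detected")
        return spins

    def release(self):
        with self._guard:
            self._held = False

lock = SpinLock()
assert lock.acquire() == 0   # uncontended: acquired without spinning
lock.release()
```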

Typically, any processor in the CEC may initiate I/O operations, but only
one fields the resulting interrupts. This helps control I/O delays to
processing by limiting interrupts.

Rick
Mike Stramba
2010-05-19 14:55:47 UTC
Permalink
So MVS38j *will* use multiple cpus ?

Or is that Hercules starting the O.S on the multiple cpus?

The turnkey "stock" config has NUMCPU 1, is that because the multiple
CPU feature didn't exist at the time Turnkey was released, or other
reasons ?
Rick Fochtman
2010-05-19 16:16:09 UTC
Permalink
--------------------------------<snip>------------------------------
So MVS38j *will* use multiple cpus ?
-------------------------------<unsnip>-----------------------------
Very nicely. 3.8J would use up to 8 CPU's in a CEC.

---------------------------------<snip>--------------------------
Or is that Hercules starting the O.S on the multiple cpus?
-----------------------------<unsnip>----------------------------
I don't know Hercules well enough to answer that. But on a single-engine
PC, trying to activate a second MVS CPU probably isn't going to serve
any purpose, other than, perhaps, add to confusion factors.

------------------------------<snip>--------------------------
The turnkey "stock" config has NUMCPU 1, is that because the multiple
CPU feature didn't exist at the time Turnkey was released, or other
reasons ?
------------------------------<unsnip>------------------------
MVS has always run on up to at least 8 CPU's, so there may well be other
reasons.

Rick
Jeff Sumner
2010-05-20 12:00:21 UTC
Permalink
With 8 CPU's defined on an 8 core box, I get:


ipl 148
CPU0000: SIGP Initial program reset (07) CPU0001, PARM 00000000: CC 0
CPU0000: SIGP Restart (06) CPU0001, PARM 00000000: CC 0
CPU0000: SIGP Initial program reset (07) CPU0002, PARM 00000000: CC 0
CPU0000: SIGP Restart (06) CPU0002, PARM 00000000: CC 0
CPU0000: SIGP Initial program reset (07) CPU0003, PARM 00000000: CC 0
CPU0000: SIGP Restart (06) CPU0003, PARM 00000000: CC 0
CPU0000: SIGP Initial program reset (07) CPU0004, PARM 00000000: CC 0
CPU0000: SIGP Restart (06) CPU0004, PARM 00000000: CC 0
CPU0000: SIGP Initial program reset (07) CPU0005, PARM 00000000: CC 0
HHCCP011I CPU0000: Disabled wait state
PSW=000200FF 80050064
(and nothing comes up on any of the x3270 sessions)

Similar things with 4.

I can only boot Turnkey with 2.
laddiehanus
2010-05-20 12:29:51 UTC
Permalink
IIRC, machines of the era when 3.8 was current had only 2 CPU's max. Even the 3084, which had 4 CPU's, had to be partitioned into 2 sides with 2 CPU's each when in 370 mode, and SP 1.3 was required.

I think that 3.8's support of 2 CPU's was buggy (but I am really not sure) and that SE or SP was really needed to get multi-CPU support to run well.

The data structures do support 16; look at macros ihalccat and ihapccat in the source code.

Laddie
Wally Mclaughlin
2010-05-20 15:42:50 UTC
Permalink
In 1978, I was working for a service bureau (Datacrown Inc. of Toronto,
Canada) with a 3033 MP and three 168 MP's all running MVS 3.8
(pre-MVS/SE) with no problems.

We were very current on maintenance, so if this is an MVS problem, there
may be PTF's available for this.

Wally Mclaughlin
yvette hirth
2010-05-20 15:52:30 UTC
Permalink
Post by Wally Mclaughlin
In 1978, I was working for a service bureau (Datacrown Inc. of Toronto,
Canada) with a 3033 MP and three 168 MP's all running MVS 3.8
(pre-MVS/SE) with no problems.
the 168 MP's were 2 CPUs each. iirc the 3033 MP was 2 as well.

there doesn't seem to be a beef about 2 CPUs on 3.8; the beef seems to
be about > 2 CPUs on 3.8. i use 1 CPU, since it's a "test" system, as
opposed to a "production" system.

yvette hirth
PeterH
2010-05-20 16:16:41 UTC
Permalink
Post by yvette hirth
the 168 MP's were 2 CPUs each. iirc the 3033 MP was 2 as well.
there doesn't seem to be a beef about 2 CPUs on 3.8; the beef seems to
be about > 2 CPUs on 3.8. i use 1 CPU, since it's a "test" system, as
opposed to a "production" system.
The 3890 (Apache) was also two processors, but in one frame, not the
two frames used in the then-current IBM products and the earlier 580.

The 5890M could accommodate up to 16 processors in the one frame.

The then-current IBM CMOS product (TPS, e.g.) had seven processors on
the one (and only) processor card, with six of the S/390 CPUs being
true CPUs (MVS, VS, e.g.) and the remaining CPU performing the
function of the channel processor, using firmware which had been
ported from the earlier RISC channel processor to S/390 code for this
application.
Mike Stramba
2010-05-20 16:25:28 UTC
Permalink
...Except for the "spin" messages it was spewing, which I had no idea
what they meant ;)
Post by yvette hirth
there doesn't seem to be a beef about 2 CPUs on 3.8
PeterH
2010-05-20 17:01:24 UTC
Permalink
Post by Mike Stramba
..Except for the "spin" messages it was spewing, which I had
no idea what they meant ;)
Certain "locks" within MVS are so-called "suspend" locks (LOCAL LOCK,
e.g.) whereas other locks within MVS are so-called "spin" locks
(CHANNEL LOCK, e.g.).

There is only one "spin" lock within MVS which is not part of the
LOCKing system, and that is the disabled lock within IEWFETCH. It is
used to ensure that at least 240 bytes of the CONTROL/RLD record have
been read when FETCH is reading such records, because the relocation
function which FETCH performs is done in a DIE, in response to a PCI
interrupt issued by the channel when the READ CONTROL/RLD CCW was
fetched by the channel.

The DIE "spins" on a byte within the CONTROL/RLD record, and when
that byte changes, the DIE knows that the channel has actually read
the CONTROL/RLD record, and it is safe for the data within that
record to be processed.
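PeterH's "spin on a byte" can be sketched in miniature outside MVS terms. In this toy Python model (every name is illustrative, not MVS code), one thread plays the channel, filling a 240-byte record and flipping a flag byte last, while the "DIE" busy-waits on that byte before touching the data:

```python
import threading
import time

storage = bytearray(240)   # stand-in for the CONTROL/RLD record buffer
FLAG = 0                   # offset of the byte the "DIE" spins on

def channel():
    """Plays the channel: reads the record into storage, flag byte last."""
    time.sleep(0.01)                 # the device/channel takes a moment
    storage[1:] = b"\x42" * 239      # record contents arrive
    storage[FLAG] = 0xFF             # only now is the record complete

result = {}

def die():
    """Plays FETCH's DIE: spin until the flag changes, then use the data."""
    while storage[FLAG] == 0:        # the disabled bit spin
        pass
    result["first_data_byte"] = storage[1]   # safe to process now

t = threading.Thread(target=channel)
t.start()
die()
t.join()
```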
Mike Stramba
2010-05-20 17:35:23 UTC
Permalink
"DIE" ??

Umm, Device Interrupt Handler??? (a (somewhat) educated guess)
yvette hirth
2010-05-20 17:42:39 UTC
Permalink
Post by Mike Stramba
"DIE" ??
Disabled Interrupt Exit. It runs disabled (for interrupts).

yvette hirth
PeterH
2010-05-20 22:21:14 UTC
Permalink
Post by yvette hirth
Post by Mike Stramba
"DIE" ??
Disabled Interrupt Exit. runs disabled (for interrupts).
In this context it is the DIE within Basic IOS, and which replaced
the PCI Appendage in MVT, et. al., and which allows synchronous
(i.e., real-time) operations of I/O devices and the software,
including device drivers and dependent applications.

It doesn't take that long for the header on a CONTROL/RLD record to
be read, but the responding software, IEWFETCH, must be absolutely
certain that the data is in storage before FETCH's DIE begins to
operate on that data (in the relocation routine, in the computation
of the next disk and main storage addresses, and in the changing of a
NOP to a TIC so the channel program may continue without
interruption). Hence the "disabled bit spin" is embedded within the
DIE, which exit FETCH passes as a parameter in the IOSB; the IOSB, in
turn, is passed as a parameter to the STARTIO macro (which, in turn,
invokes IECIOSCN to start the I/O operation).

The design is such that, under most conditions, a single STARTIO
macro can read (and relocate) an entire load module. And, as I/O
operations are, in general, the second most time-consuming operation
after GETMAIN and FREEMAIN, this elaborate design is justified.

FETCH's DIE did not work properly in the early releases of MVS,
leading to FETCH operations missing many PCIs, and, hence taking many
disk revolutions to load even relatively simple load modules.

CTC operations and the JESes use a DIE, too.
Rick Fochtman
2010-05-20 21:14:34 UTC
Permalink
-----------------------<snip>----------------------------
Post by Mike Stramba
"DIE" ??
-------------------------<unsnip>---------------------------
Disabled Interrupt Exit

Rick
Gerhard Postpischil
2010-05-23 07:38:05 UTC
Permalink
Post by Mike Stramba
...Except for the "spin" messages it was spewing, which I had no
idea what they meant ;)
Several years ago I tried NUMCPU 2 (for a 4381), and got weird
hangs and wait states. I tried it earlier today, and it works
like a charm - no spin loops. Perhaps the "secret" is to attach
devices only to CPU 0, as was the case for real 43x1 machines
(support for multiple I/O processors came in with MVS/XA).

Gerhard Postpischil
Bradford, VT
Mike Stramba
2010-05-23 08:04:06 UTC
Permalink
Gerhard ,
Post by Gerhard Postpischil
Perhaps the "secret" is to attach
devices only to CPU 0, as was the case for real 43x1 machines
(support for multiple I/O processors came in with MVS/XA)
How do you do that from the config file?

I don't see any CPU settings in the device definition documentation

Mike
Gerhard Postpischil
2010-05-23 17:17:42 UTC
Permalink
Post by Mike Stramba
How do you do that from the config file?
I could have sworn I saw a parameter in the conf description,
but I just reread the user guide and couldn't find anything. But the
CPU specification option for a controller is also available as
part of an MVS sysgen? (it's been a long time)

I got a reminder about the CPU vs I/O processor on our 4381. We
had four channels on each CPU, running SP 1.3, and I made the
mistake of letting one of my employees handle a move, device
configuration, and sysgen. He knew that load balancing was
important, so he defined our NCR/Comten 3695 on channel 0, but
on CPU 2. So whenever a task requested an I/O, it was routed to
CPU 0 for processing. CPU 0 shipped the request for the Comten
to CPU 2, which issued the I/O. The response came back on CPU 2,
which had to ship it to 0, ad nauseam. The problem came to my
attention because on the first day of production, users' remote
JES lines kept dropping on time-out errors. I switched cables to
CPU 0, ran a quick I/O gen, and the problem disappeared.

And I was wrong about running with 2 CPUs. I set the number in
the conf file, changed it in HercGUI, and IPLed (twice so far)
then wrote a little program to format the LCCA and PCCA, only to
find out I only have one CPU active - so I'll go back into my
corner and work on Wylbur...


Gerhard Postpischil
Bradford, VT
Dave Wade
2010-05-23 08:37:48 UTC
Permalink
Post by Gerhard Postpischil
Several years ago I tried NUMCPU 2 (for a 4381), and got weird
hangs and wait states. I tried it earlier today, and it works
like a charm - no spin loops. Perhaps the "secret" is to attach
devices only to CPU 0, as was the case for real 43x1 machines
(support for multiple I/O processors came in with MVS/XA).
We had a real 4381 and I am sure it had devices on both channel sets. I
think the trick was they had to match...
Tony Harminc
2010-05-22 03:35:10 UTC
Permalink
Post by Wally Mclaughlin
In 1978, I was working for a service bureau (Datacrown Inc. of Toronto, Canada) with a 3033 MP and three 168 MP's all running MVS 3.8 (pre-MVS/SE) with no problems.
Hmmm... That name rings a bell. You're not half of the TSS team, are you?

Tony H.
Wally Mclaughlin
2010-05-28 02:59:42 UTC
Permalink
Tony,

Yes, you got me.

Mark Kolb and I created Top Secret Security in 1981. We went to school
together, and both worked at Datacrown before TSS.

Wally


Roger Bowler
2010-05-28 20:17:54 UTC
Permalink
Post by Wally Mclaughlin
Yes, you got me.
Mark Kolb and I created Top Secret Security in 1981. We went to school
together, and both worked at Datacrown before TSS.
It's great to see so many venerable heroes of mainframe history
congregating here.
--
Cordialement,
Roger Bowler

roger.bowler-***@public.gmane.org
http://perso.wanadoo.fr/rbowler
Hercules "the people's mainframe"
Tony Harminc
2010-05-29 00:01:22 UTC
Permalink
Post by Wally Mclaughlin
Tony,
Yes, you got me.
Mark Kolb and I created Top Secret Security in 1981. We went to school together, and both worked at Datacrown before TSS.
I seem to remember a time when you guys answered the support line
directly. I was at Gulf in Toronto at that time, early 1980s. But I
think you were already in Ohio or wherever it was. Who would've
thought back then that TSS and ACF2 would end up at CA... And for that
matter, that I'd still be working with both.

Well, enough nostalgia - back to work everyone!

Tony H.

paoloG
2010-05-20 18:38:10 UTC
Permalink
Post by laddiehanus
IIRC machines of the era when 3.8 was current only had 2 CPU's max. Even the 3084 which had 4 cpu's had to be partitioned into 2 sides with 2 CPU's each when in 370 mode and SP 1.3 was required.
I think that 3.8 support of 2 cpu's was buggy (but I am not really not sure) and that SE or SP was really needed to get multi cpu support to run well.
The data structures do support 16 look at macros ihalccat and ihapccat in the source code
Laddie
The 3084 with 4 CPU's was supported in MP mode only by MVS/XA, and it was the first IBM mainframe to have more than 2 processors (in 1982, IIRC).

So I guess that MVS 3.8 never supported more than 2 CPU's (as you correctly said).

Regards.

Paul
Tony Harminc
2010-05-22 03:40:04 UTC
Permalink
Post by Mike Stramba
So MVS38j *will* use multiple cpus ?
It will use up to 2.
Post by Mike Stramba
Or is that Hercules starting the O.S on the multiple cpus?
I don't understand what that means.
Post by Mike Stramba
The turnkey "stock" config has NUMCPU 1,  is that because the multiple
CPU feature didn't exist at the time Turnkey was released, or other
reasons ?
The first release of MVS, i.e. 2.0, supported up to 16 CPUs, though
it's not clear if it was ever tested beyond running it in the custom
VM system that existed for its development.

Some time between 2.0 and 3.8 (3.0, iirc) the code to support more
than 2 CPUs was effectively removed, though the control blocks
continue to support up to 16.

Tony H.