steve
Intel's 50mHz 486s: what's the buzz?
Mar 8 05:16 UTC 1992
Does anyone know what the status is on the 50 mHz iAPX 486?
It was rolled out with much fanfare in early summer last
year. About 6 weeks later they recalled all 50mHz product,
because a small number of them would randomly reset themselves
when running at top speed, at temperatures close to the
maximum rating.
This is, I believe, the first "recall" of an Intel product,
certainly in the CPU arena. I know that several people who
have 50 mHz Compaqs said "no way!" to their request to send
them back...
So what's the deal this week? Think Intel's tri-metal
process is going to get ironed out? Or will the 66 mHz
effective-speed units get out first?

21 responses total.

mcnally
response 1 of 21:
Mar 11 06:23 UTC 1992
They didn't recall the early 386s with the screwed-up math?

danr
response 2 of 21:
Mar 11 16:26 UTC 1992
re #0: Shouldn't that be 50 MHz?? 50 mHz would be awfully slow. :)

steve
response 3 of 21:
Mar 11 23:53 UTC 1992
As far as I know, mHz was the correct case--thought that was
weird when I first saw it.
Anyway, Intel has again released the 50mHz 486, and this time
they say it'll work correctly over the entire temperature range.
Gateway 2000 will start selling them next week.

mistik
response 4 of 21:
Mar 12 01:24 UTC 1992
Wonderful! Any news on Solaris ports to the 486?

steve
response 5 of 21:
Mar 12 04:27 UTC 1992
Sun has done that already. Don't know if they're selling it
or still testing.

mju
response 6 of 21:
Mar 12 05:49 UTC 1992
(Be aware, though, that Solaris 2.0 is nothing like Solaris 1.0, aka
SunOS 4.1. SunOS 4.1/Solaris 1.0 was based on 4.3BSD Unix. Solaris 2.0
is based on SysVr4 Unix. There is a *big* difference. I would not
be surprised to learn that Solaris 2.0 on the SPARC is quite unstable,
at least until they get up to Solaris 2.1/SPARC. Solaris 2.0/i386
should be a bit better, since Interactive (or whatever Sun is calling
the Solaris/i386 division that they purchased from Kodak) has more
experience with SysVr4 on the Intel chip than Sun does with SysVr4
on the SPARC chip.)

bad
response 7 of 21:
Mar 12 06:01 UTC 1992
50 mHz? Geez, what's that...20 seconds per clock cycle? Man.

mistik
response 8 of 21:
Mar 12 07:03 UTC 1992
Which one is the one that doesn't link files across filesystems?
Do they always have a limit of 65K for a filesystem (maybe inodes??)?

danr
response 9 of 21:
Mar 12 17:30 UTC 1992
re #3,8: MHz is the proper abbreviation. mHz is 1/1000 of a Hertz.
The AA News just made the same mistake, and published my letter
correcting them.

steve
response 10 of 21:
Mar 12 18:34 UTC 1992
Yeah? OK. MHz it is.
How much is Solaris 2.0?

jdg
response 11 of 21:
Mar 13 00:53 UTC 1992
re 8: neither, and not always.

mistik
response 12 of 21:
Mar 13 01:25 UTC 1992
I read on Usenet that someone ran out of inodes and had to create another
filesystem. But then the news software tried to link files across
filesystems, and failed. That was the 386 version, and someone was saying
that the BSD version would not have that problem. Now I don't know if that
refers to the inode numbers or to linking across filesystems.
Does Unix let you create files that span disks/filesystems?
So far, from reading all the questions and answers on Unix, it looks like
the BSD version is better. However, there must be a reason for it not being
as popular as the AT&T version. Is that similar to the MS-DOS story, or
is there really some technical motivation behind it?

steve
response 13 of 21:
Mar 13 06:15 UTC 1992
I don't know of any version that does. Yet. BSD Unix is indeed
popular; if you go back in time, a lot of the machines
out there used BSD Unix. There hasn't been a migration down towards
the little machines till lately. But that's all changing now.

mistik
response 14 of 21:
Mar 13 06:25 UTC 1992
What is the deal with inodes? Is it something like file allocation table
entries? Would that change if one used a defragmenter? I understand that
even then there would be a limit to the number of files on the filesystem.

mju
response 15 of 21:
Mar 13 22:49 UTC 1992
An inode is the reference to a physical file -- it stores the
information about the file, such as its permissions, its owner,
which disk blocks belong to it, etc. Directory entries just contain
the name of the file and the inode number. You can "link" files by
making more than one directory entry that references the same inode
number.
Unfortunately, inode numbers are not unique across filesystems, which
is why cross-device links didn't work until symbolic links came along.
A "symbolic link" is a bit different. Symlinks (as they're usually called)
consist of a special type of file, which contains the *filename* (instead
of the inode number) of the real file to access. So, using symlinks,
you can link to any file you can reference by pathname -- directories,
files on other filesystems, even files on filesystems that are NFS-
mounted from other hosts on the network.
The 65K inode limit comes from the fact that on SysVr3 and below
systems, a 16-bit field was used to store the inode number.
Naturally, this means that you can't have an inode number bigger
than 2^16-1, or 65,535. Newer systems use 32-bit inode numbers,
so they have a much larger limit on the number of inodes per filesystem.
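
To make that concrete, here is a small C sketch of the mechanics just
described: two directory entries sharing one inode, link() refusing to
cross a filesystem boundary, and a symlink crossing it by storing a
pathname. The filenames are invented for illustration, and it assumes
/tmp lives on a separate filesystem (if it doesn't on your machine,
the cross-device link() will simply succeed).

#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    struct stat sb;
    FILE *fp = fopen("original", "w");   /* create an ordinary file */
    if (fp == NULL) { perror("fopen"); return 1; }
    fclose(fp);

    /* A hard link: a second directory entry for the same inode. */
    if (link("original", "alias") != 0) { perror("link"); return 1; }
    if (stat("alias", &sb) == 0)
        printf("inode %lu now has %lu directory entries\n",
               (unsigned long)sb.st_ino, (unsigned long)sb.st_nlink);

    /* Hard links can't cross filesystems: the inode number would be
       meaningless on the other device, so link() fails with EXDEV. */
    if (link("original", "/tmp/alias") != 0 && errno == EXDEV)
        printf("cross-device hard link refused, as expected\n");

    /* A symlink stores the pathname instead of an inode number, so it
       can point anywhere -- even at NFS-mounted files.  (The stored
       text is resolved relative to the symlink's own directory, so a
       real use would store an absolute path here.) */
    if (symlink("original", "/tmp/sym") == 0)
        printf("symlink across filesystems: no problem\n");

    unlink("original"); unlink("alias");
    unlink("/tmp/alias"); unlink("/tmp/sym");
    return 0;
}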

mistik
response 16 of 21:
Mar 13 23:47 UTC 1992
Very enlightening, thank you.

mdw
response 17 of 21:
Mar 16 19:04 UTC 1992
There are some versions of Unix that will allow you to create
filesystems that span multiple drives. Historically, such games have
been limited to unusual applications like the telephone directory
assistance program.

mistik
response 18 of 21:
Mar 16 20:39 UTC 1992
How do the distributed file systems tie into this subject (although I am
not familiar with nsf-net, etc.)? Do they have to be linked over Ethernet?
What if you fake it, that is, let it think that there are actually two
file servers connected through a fake ?socket? which connects input with
output and vice versa?

mju
response 19 of 21:
Mar 17 05:31 UTC 1992
Well, the particular variety of distributed filesystem most people are
talking about here is Sun's NFS, or Network File System. NFS
operates over UDP, which operates over IP, which operates over just
about any kind of networking medium you want to put it on. Ethernet
is by no means required; there are people running NFS over token-ring,
FDDI, and even dialup PPP and SLIP links.
You could, I suppose, use NFS to mount a local filesystem, and have
the system be none the wiser. Dunno why you'd want to, though.
(Just do something like "mount localhost:/ /mnt" to get a mirror
of your root filesystem in /mnt. Interesting Question: What does
/mnt/mnt contain at this point?)
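
Incidentally, if you want to poke at that stack from C, here's a
minimal sketch using the classic Sun RPC library: it asks a host's
portmapper whether NFS -- well-known RPC program number 100003,
version 2 in this era -- is registered over UDP. The program number is
standard; the rest (host name handling, messages) is just illustration,
and on a modern Linux box you'd link against libtirpc.

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <rpc/rpc.h>
#include <rpc/pmap_clnt.h>

#define NFS_PROGRAM 100003L   /* well-known RPC program number for NFS */
#define NFS_VERSION 2L        /* NFS version 2 */

int main(int argc, char **argv)
{
    const char *host = (argc > 1) ? argv[1] : "localhost";
    struct hostent *hp = gethostbyname(host);
    struct sockaddr_in addr;
    unsigned short port;

    if (hp == NULL) {
        fprintf(stderr, "unknown host: %s\n", host);
        return 1;
    }
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    memcpy(&addr.sin_addr, hp->h_addr_list[0], hp->h_length);

    /* Ask the portmapper (itself an RPC service, on port 111) where
       the NFS server is listening over UDP; 0 means "not registered". */
    port = pmap_getport(&addr, NFS_PROGRAM, NFS_VERSION, IPPROTO_UDP);
    if (port == 0)
        printf("%s: NFS not registered over UDP\n", host);
    else
        printf("%s: NFS answering on UDP port %u\n", host, port);
    return 0;
}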

mdw
response 20 of 21:
Mar 17 07:01 UTC 1992
If I recall, NFS doesn't follow mount points, so /mnt/mnt should be an
empty directory.
AFS also uses UDP. The mechanics inside the protocol are a bit
different. NFS uses something called Sun RPC, and is specifically
designed to be stateless. In theory, the file server can crash at any
point, come up "some time later", and if the client is still trying,
will pick right up where it left off. "In theory" - there are some
interesting practical problems. AFS uses something called RX, and
maintains state information on the file server - while you are in the
process of writing the file, the "official" copy of the file lives on
your workstation. If you read a read/write file, the file server
remembers you read the file, and maintains a table of "call backs" on
that file -- if somebody else writes that file, it will contact your
client and tell it that the cached read-only copy of the file you were
using is now invalid. The overall scheme is more complicated (and you
can get much more interesting errors) - but it does perform a lot better
than NFS. NFS, with 20-30 clients, can do a fairly good job of
saturating a local Ethernet. AFS, on the other hand, can do tolerably
well with hundreds of clients, and some of those clients can be located
at some distance from the file server - such as across the country.
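
To give a feel for that callback bookkeeping, here is a toy C sketch --
nothing like the real AFS data structures, just an invented
illustration: the server notes which clients cached a file, and
"breaks" those callbacks when somebody writes it.

#include <stdio.h>
#include <string.h>

#define MAX_CALLBACKS 8

struct callback_entry {
    char file[64];                     /* which file is cached */
    char client[MAX_CALLBACKS][32];    /* who holds a cached copy */
    int  nclients;
};

/* Record that a client has read (and therefore cached) the file. */
static void note_read(struct callback_entry *cb, const char *client)
{
    if (cb->nclients < MAX_CALLBACKS)
        strcpy(cb->client[cb->nclients++], client);
}

/* On a write, break every outstanding callback: each cached copy is
   now stale, so the server tells those clients to discard it. */
static void break_callbacks(struct callback_entry *cb, const char *writer)
{
    int i;
    for (i = 0; i < cb->nclients; i++)
        printf("callback to %s: your copy of %s is invalid (%s wrote it)\n",
               cb->client[i], cb->file, writer);
    cb->nclients = 0;
}

int main(void)
{
    struct callback_entry cb = { "/afs/project/notes", { "" }, 0 };
    note_read(&cb, "hostA");
    note_read(&cb, "hostB");
    break_callbacks(&cb, "hostC");
    return 0;
}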

mju
response 21 of 21:
Mar 19 10:58 UTC 1992
One of the bigger problems with NFS -- other than network saturation --
is error-correction. Since NFS was designed to be run over Ethernets
or similar local-area networking media, there was never any attempt
made to build error-detection or -correction into the NFS or RPC
protocols. (Ethernet, you see, already has error-detection as part of
the Ethernet frame. So if you get a mangled Ethernet frame, it never
gets passed up to the higher parts of the protocol stack, and so NFS
eventually retransmits it.) However, now that people are trying to
run NFS over wide-area links that may not be 100% error-free (at least
from NFS's point of view), it becomes a problem. It becomes even more
of a problem when people turn off UDP checksums, which are (in the
absence of something at the hardware level, like Ethernet frame
checksums) the only error-detection you've got.
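
For the curious, the checksum in question is the standard 16-bit ones'-
complement sum of RFC 1071, which UDP shares with IP and TCP. (The real
UDP checksum also covers a pseudo-header of IP addresses; the sketch
below is just the core computation.)

#include <stdio.h>
#include <stddef.h>

/* The RFC 1071 Internet checksum: sum the data as 16-bit big-endian
   words, fold the carries back in, and take the ones' complement. */
unsigned short inet_checksum(const unsigned char *p, size_t len)
{
    unsigned long sum = 0;

    while (len > 1) {                       /* full 16-bit words */
        sum += ((unsigned long)p[0] << 8) | p[1];
        p += 2;
        len -= 2;
    }
    if (len == 1)                           /* odd trailing byte */
        sum += (unsigned long)p[0] << 8;

    while (sum >> 16)                       /* fold carries into 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);

    return (unsigned short)~sum;
}

int main(void)
{
    const unsigned char msg[] = "NFS over a lossy link";
    printf("checksum = 0x%04x\n",
           inet_checksum(msg, sizeof msg - 1));
    return 0;
}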