jdg
response 25 of 37: Apr 11 01:18 UTC 1992

I got the 386 today, and loaded Stacker onto it.  Tastes great, less filling.
 
Another person in my department got the same unit but with a 120Meg 2.5"
drive... Conner?  Don't know, but would assume so.  Mine is a 60.. sigh.
bad
response 26 of 37: Apr 11 02:12 UTC 1992

Generally, your drive will work better if you do *not* taste it.
Saliva may gum up your drive. 
The manufacturer makes no guarantees as to where the drive has been - 
taste at your own risk.
mistik
response 27 of 37: Apr 11 02:15 UTC 1992

It would be interesting if you could run a benchmark comparing the data load
time or program load time on the machine with the bigger disk against the
smaller disk with Stacker.  I wonder whether it would increase the disk
throughput.
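
Something like this would do as a crude read test on both machines; it is
only a sketch (the block size, the file argument, and the use of clock()
are arbitrary choices, and a disk cache such as SMARTDrive will skew the
numbers):

    /* readtime.c -- rough read-throughput test: time how long it takes
     * to pull a big file off the disk.  Run it against the same file on
     * the stacked and the unstacked drive and compare. */
    #include <stdio.h>
    #include <time.h>

    #define BUFSZ 8192

    int main(int argc, char *argv[])
    {
        FILE *fp;
        static char buf[BUFSZ];
        long total = 0;
        size_t n;
        clock_t start, stop;
        double secs;

        if (argc < 2) {
            fprintf(stderr, "usage: readtime file\n");
            return 1;
        }
        fp = fopen(argv[1], "rb");
        if (fp == NULL) {
            perror(argv[1]);
            return 1;
        }
        start = clock();
        while ((n = fread(buf, 1, BUFSZ, fp)) > 0)
            total += (long)n;
        stop = clock();
        fclose(fp);

        secs = (double)(stop - start) / CLOCKS_PER_SEC;
        printf("%ld bytes in %.2f s = %.1f KB/s\n", total, secs,
               secs > 0 ? total / 1024.0 / secs : 0.0);
        return 0;
    }

Any DOS C compiler should build it; the bigger the test file, the less the
coarse clock() resolution matters.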
jdg
response 28 of 37: Apr 11 14:00 UTC 1992

I have a feeling that data placement would affect the results.  You've got
seek time (head movement) as well as latency (rotational delay) that would
vary the performance.

So the answer would be like John's, "It Depends."
mistik
response 29 of 37: Apr 11 18:28 UTC 1992

Since you got a smaller disk, chances are that your seeks will take less
time (not necessarily), and the transfer rate from/to disk will be almost
doubled.  Your disk probably has fewer cylinders.  Disk organization
can affect it a lot.
mju
response 30 of 37: Apr 12 01:17 UTC 1992

Hmm.  One has to be careful with IDE drives when making judgements
based on drive geometry, since the drive geometry you enter into
your CMOS frequently has nothing to do with the actual drive geometry.
It may very well be that the number of physical cylinders has been
reduced; on the other hand, the number of sectors/track may also have
been reduced.  It's hard to say without looking at the drive's spec sheet.

Also, keep in mind that the physical disk<->controller transfer time
is unchanged; you are, after all, still using the same disk.  While it
may take less disk time to read the data in compressed format, you also
have to spend CPU time decompressing it.  How fast your CPU is determines
whether you can decompress the data in less time than it would have taken
to read the uncompressed data from the drive.
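
To put rough numbers on that trade-off (the figures below are invented, not
measured from any real drive):

    /* breakeven.c -- back-of-envelope: when is reading compressed data
     * a win?  All numbers are hypothetical; plug in your own. */
    #include <stdio.h>

    int main(void)
    {
        double raw_kbs = 500.0;   /* drive transfer rate, KB/s (a guess) */
        double ratio   = 2.0;     /* compression ratio (a guess, > 1)    */
        double file_kb = 100.0;   /* logical (uncompressed) file size    */

        double t_plain = file_kb / raw_kbs;           /* read uncompressed */
        double t_comp  = (file_kb / ratio) / raw_kbs; /* read compressed   */
        double budget  = t_plain - t_comp;            /* time left for CPU */

        printf("uncompressed read: %.3f s\n", t_plain);
        printf("compressed read:   %.3f s\n", t_comp);
        printf("compression wins if the CPU expands %.0f KB in under %.3f s\n",
               file_kb, budget);
        printf("(i.e. decompresses at %.0f KB/s or better)\n",
               file_kb / budget);
        return 0;
    }

With those made-up figures the drive saves 0.1 s, so the CPU has to produce
the 100 KB at 1000 KB/s or faster for the compressed read to come out ahead.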

Under Unix or OS/2, Stacker or similar programs would be a very *bad*
idea without a coprocessor board.  This is because unlike MS-DOS,
Unix and OS/2 can do other things with the CPU when they are reading
data from the disk.  Unless, of course, the CPU is required to
decompress the data as it reads it, in which case none of your other
tasks can run while the one that's using the disk is blocked for I/O.

(This is why hard disk controllers that do DMA instead of PIO are
so much better under a multitasking OS, especially if you have a CPU
with an instruction cache.  Because the controller is doing the
data transfer all by itself, without the CPU's involvement, the CPU
can literally go off and do something else while waiting for the
interrupt from the controller.  The only problem is that the controller
has to lock the memory bus when it's doing the DMA transfer, which
means that the CPU can't access the memory.  Which, in turn, is why
an instruction cache is helpful -- if the CPU can operate off the cache
while the controller has the memory bus locked, then it won't need
to go to memory, and thus won't have to wait for the controller to
finish.)
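
A back-of-envelope illustration of that overlap (the millisecond figures
are invented):

    /* overlap.c -- why DMA matters under a multitasking OS: with PIO the
     * CPU is tied up for the whole transfer, so the disk time and the
     * other task's CPU time add; with DMA (and enough cache hits) they
     * can overlap. */
    #include <stdio.h>

    int main(void)
    {
        double disk_ms = 40.0;  /* time the transfer keeps the disk busy */
        double cpu_ms  = 30.0;  /* CPU work some other task wants to do  */

        printf("PIO (CPU shovels the data itself): %.0f ms\n",
               disk_ms + cpu_ms);
        printf("DMA (other task runs meanwhile):   %.0f ms\n",
               disk_ms > cpu_ms ? disk_ms : cpu_ms);
        return 0;
    }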
mistik
response 31 of 37: Apr 12 02:05 UTC 1992

Yes, that's correct.  I think the result on the laptop would be something
to the effect of having slightly faster disk access (running MS-DOS, of course).
Just a guess, though; as you say, the actual drive design and caching might
change things a lot.

Under MS-DOS, assuming that the CPU isn't doing anything else while the data
is being shuffled from the disk, and assuming decompression doesn't take
longer than the data transfer from the disk, one would expect an
improvement in the effective data transfer rate from/to disk, with reads
from disk probably faster than writes to disk, considering that compressing
takes longer than decompressing.  This simply neglects the seek/search delays.

jdg
response 32 of 37: Apr 12 13:04 UTC 1992

Don't forget that data transfer is usually a small component of response
time, compared with seek, latency, reconnect time, etc... Depending on
block size, of course.
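
Some typical-of-the-era numbers (guesses, not measurements) for one small
read make the point:

    /* access.c -- where the milliseconds go for a single small read. */
    #include <stdio.h>

    int main(void)
    {
        double seek_ms  = 16.0;                  /* average seek          */
        double rpm      = 3600.0;
        double lat_ms   = 0.5 * 60000.0 / rpm;   /* half a rotation       */
        double rate_kbs = 700.0;                 /* sustained media rate  */
        double block_kb = 4.0;
        double xfer_ms  = block_kb / rate_kbs * 1000.0;
        double total    = seek_ms + lat_ms + xfer_ms;

        printf("seek %4.1f ms, latency %4.1f ms, transfer %4.1f ms\n",
               seek_ms, lat_ms, xfer_ms);
        printf("transfer is about %.0f%% of the %.1f ms total\n",
               100.0 * xfer_ms / total, total);
        return 0;
    }

For a 4 KB block the transfer is roughly a fifth of the total; make the
block much bigger and the picture changes.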
mistik
response 33 of 37: Apr 12 17:23 UTC 1992

Yes, but presumably there will be fewer seeks and reconnects!
steve
response 34 of 37: Apr 16 03:08 UTC 1992

   I recently played with two DOS systems that had the same speed 386sx
and identical disks.  One was stacked; I asked to look at the two for
a while before the other person got it.  Basically, the speed of the stacked
unit varied from better to worse!  It all depended on the kind of data it was
dealing with.  It seemed definitely worse unzipping a file, and faster
when trying to load straight ASCII into memory.  If I had had more time
I would have tried more things.  But I was really surprised to find out
that the stacked unit wasn't consistently slower.
mistik
response 35 of 37: Apr 16 03:15 UTC 1992

It might get even better if you could teach it to store .zip, .arc, and .Z
files as they are (maybe it would be even better if Stacker checked the
headers).
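
Peeking at the first few bytes would be enough to spot most of those.  A
sketch (the magic values for .zip, .arc and .Z are the standard ones; the
rest is just made up for illustration):

    /* isstored.c -- decide whether a file is already compressed by
     * looking at its first bytes, the way a Stacker-like driver might
     * decide to store it as-is. */
    #include <stdio.h>

    static int already_compressed(FILE *fp)
    {
        unsigned char b[4] = {0, 0, 0, 0};

        if (fread(b, 1, 4, fp) == 0)
            return 0;                    /* empty file: nothing to tell */
        if (b[0] == 'P' && b[1] == 'K' && b[2] == 3 && b[3] == 4)
            return 1;                    /* PKZIP local file header     */
        if (b[0] == 0x1A)
            return 1;                    /* ARC entry marker            */
        if (b[0] == 0x1F && b[1] == 0x9D)
            return 1;                    /* compress(1) .Z              */
        return 0;
    }

    int main(int argc, char *argv[])
    {
        int i;
        for (i = 1; i < argc; i++) {
            FILE *fp = fopen(argv[i], "rb");
            if (fp == NULL) {
                perror(argv[i]);
                continue;
            }
            printf("%s: %s\n", argv[i],
                   already_compressed(fp) ? "store as-is" : "compress");
            fclose(fp);
        }
        return 0;
    }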
jdg
response 36 of 37: Apr 17 01:18 UTC 1992

Steve, if you wanna play with mine, I've got an 8meg area that's unstacked
and you can do some before-and-after testing if you want.  Bring your
benchmarks on 3.5"...
steve
response 37 of 37: Apr 18 03:32 UTC 1992

   might just do that...