'5mb External Hard Drive'

pxpaulx

Mu-43 All-Pro
Joined
Jan 19, 2010
Messages
1,270
Location
Midwest
Real Name
Paul
Forget RAW files, that thing could barely stomach half a jpeg nowadays, hahaha.
 

speedandstyle

Mu-43 Hall of Famer
Joined
Nov 29, 2011
Messages
2,477
Location
Roswell NM yes that Roswell!
First digital camera!
kodak_digicam.jpg
 

Brian S

Mu-43 Top Veteran
Joined
Apr 11, 2009
Messages
714
The computer that I used in 1979 had 4MBytes of memory. It took 128,000 chips to make that 4MBytes. And three chillers for water-cooled electronics.

It also cost $8M. And was big. Very, Very Big. As in a warehouse-sized building. 28" disk platters for head-per-track disk drives.

And my 1967 calculator is pretty big, too.
 

lenshoarder

Mu-43 All-Pro
Joined
Nov 7, 2010
Messages
1,325
I remember back in the '80s we had the first terabyte of spinning storage. It took up an entire floor of the building.

I still have my Apple 5MB ProFiles. Yes, they still work.
 

G1 User

Mu-43 Veteran
Joined
Jul 20, 2010
Messages
411
A 32GB card will hold 10TB by then, RAW files will be 150MB and we will have an m4/3 sensor with 100MP. Our computers will have a 30GHz 8-core CPU with L1 = 1TB and L2 = 512GB... with a 1MB quad-bus, 512GB RAM,

Zoom, Zoom Zoom Weeeeeeeeeeeeeeeeeeeee
 

Promit

Mu-43 All-Pro
Joined
Jun 6, 2011
Messages
1,820
Location
Baltimore, MD
Real Name
Promit Roy
So... what's the outlook for the next 10yrs..? :rofl:
I'm going to answer seriously.

Moore's Law is currently running at about an 18-24 month cycle, let's go conservative and round it off to 5 cycles. Each cycle represents a doubling of transistor density, more or less. 2^5 gives us a 32x outlook on pretty much everything.

Our common cards right now are 16 GB, so that puts us at 512 GB for a common card and 2-4 TB on the high end. Incredibly dirt cheap cards in the pathetic 128 GB range. 4 GB is about the standard on computer memory right now; this jumps us to about 128 GB and possibly much higher on serious workstations (my work computer is 16 GB, so that jumps us to 512). A typical computer now is running four CPU cores, and I'm loath to even entertain the thought of a 128-core desktop computer. Makes no sense to me.

And if we take a very conservative 5 years for a pixel doubling on sensors, that suggests that in ten years our micro four thirds size sensors will be pushing about 48 MP. And in fact, 2001 was the era of the 3 megapixel digicam, so that's pretty much dead in line.
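If anyone wants to poke at the numbers themselves, here's a quick back-of-the-envelope sketch of the same math in C. The starting figures are just the ones quoted above, and the 12 MP sensor baseline is my own assumption backed out of the 48 MP figure:

#include <stdio.h>

/* Rough Moore's Law extrapolation over 10 years.
 * Assumes a ~2-year doubling cycle (5 cycles), so everything scales by 2^5 = 32.
 * Starting values are just the figures quoted in this post. */
int main(void)
{
    const int cycles = 5;            /* 10 years / ~2-year doubling */
    const int factor = 1 << cycles;  /* 2^5 = 32 */

    printf("Scaling factor over 10 years: %dx\n", factor);
    printf("Common card:  16 GB -> %d GB\n", 16 * factor);    /* 512 GB */
    printf("Typical RAM:   4 GB -> %d GB\n",  4 * factor);    /* 128 GB */
    printf("Workstation:  16 GB -> %d GB\n", 16 * factor);    /* 512 GB */

    /* Sensors double more slowly (call it ~5 years per doubling),
     * so only 2 cycles in a decade. 12 MP is an assumed baseline. */
    printf("m4/3 sensor:  12 MP -> %d MP\n", 12 * (1 << 2));  /* 48 MP */
    return 0;
}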

Yeah, ten years is a whole new freaking era when it comes to digital technology.


Our computers will have a 30GHz 8-core CPU with L1 = 1TB and L2 = 512GB
Actually, due to non-transistor-related limitations, L1 caches don't grow too fast. The Pentium II had a 32 KB L1 cache, and the current Intel Nehalem i7 has a 64 KB L1 cache. In fact, you could get a Pentium II in Xeon form with a 2 MB L2 cache, which it turns out is the same as you get on an 8-core Nehalem. We went to colossal L3 caches instead.

And now you all have much more information than you wanted.
 

lenshoarder

Mu-43 All-Pro
Joined
Nov 7, 2010
Messages
1,325
A 32GB card will hold 10TB by then, RAW files will be 150MB and we will have an m4/3 sensor with 100MP. Our computers will have a 30GHz 8-core CPU with L1 = 1TB and L2 = 512GB... with a 1MB quad-bus, 512GB RAM,

Zoom, Zoom Zoom Weeeeeeeeeeeeeeeeeeeee


I doubt we will have 30GHz CPUs. There's this little thing called the speed of light that's been a bottleneck. Also, there's a problem with the heat being generated. CPU clocks haven't really gotten that much faster in the last few years. CPUs ran between 2-3 GHz 10 years ago, and they run about 2-3 GHz now. They've just gotten more efficient. That's also why we've gone multi-core.
 

lenshoarder

Mu-43 All-Pro
Joined
Nov 7, 2010
Messages
1,325
I'm going to answer seriously.

Moore's Law is currently running at about an 18-24 month cycle, let's go conservative and round it off to 5 cycles. Each cycle represents a doubling of transistor density, more or less. 2^5 gives us a 32x outlook on pretty much everything.

Our common cards right now are 16 GB, so that puts us at 512 GB for a common card and 2-4 TB on the high end. Incredibly dirt cheap cards in the pathetic 128 GB range. 4 GB is about the standard on computer memory right now; this jumps us to about 128 GB and possibly much higher on serious workstations (my work computer is 16 GB, so that jumps us to 512). A typical computer now is running four CPU cores, and I'm loath to even entertain the thought of a 128-core desktop computer. Makes no sense to me.

Moore's Law started hitting speed bumps a while back. Now it's heading right into a wall. Silicon is pretty much tapped out. The only way to get substantially more performance out of silicon is through parallelism. 128 isn't that many cores. Back in the day I had a 64,000-CPU machine at work and a 4-CPU machine at home. This was back in the early '90s.

The future is in something like quantum computing. Sure, optical computing, 3D chips and diamondoid substrates might buy silicon a few more revs, but it's a mature technology. We need a new revolution to get much beyond what we currently have.

Like most mature technologies, the easy stuff has been done and we've plateaued. Look at airplanes. There were huge gains in the first 20-40 years and then we leveled off. What's the difference between a '70s airliner and a new one? Not much. A few single-digit percentage points in efficiency.
 

Brian S

Mu-43 Top Veteran
Joined
Apr 11, 2009
Messages
714
I'm hoping for more parallelism in computers. The Texas Instruments "ASC" machine that I used over 30 years ago could process an entire 2-D image with one vector instruction, no loops. It ran a 12.5 MHz clock, but could perform 100 MFLOPS. That was floating point, and I could perform bit-wise operations on pixels even faster. The parallelism gave a factor of 80 speedup on some operations. The Intel processors have a very limited "vector" instruction set where you move several operands into a wide register and perform the operations in parallel. Now add in a set of registers that loop across entire images on one instruction, and you speed up the operation without speeding up the clock.

Some of my image processing software took 30 CPU days on a VAX 11/780 to run. I ended up rewriting code to use the block-move instruction set on it to get a factor of 30 speed-up. I miss having memory-to-memory vector instructions.
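For anyone curious what that "several operands into a wide register" style looks like today, here's a small C sketch using Intel's SSE2 intrinsics - purely illustrative, with made-up names, and nothing to do with the ASC's actual instruction set:

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>
#include <stdint.h>

/* Add a constant offset to every 8-bit pixel in a buffer.
 * The SSE2 loop handles 16 pixels per instruction instead of one -
 * the same idea as a vector machine, just with much narrower registers. */
void brighten(uint8_t *pixels, size_t count, uint8_t offset)
{
    const __m128i voff = _mm_set1_epi8((char)offset);
    size_t i = 0;

    /* Vector loop: 16 pixels at a time, saturating so values clamp at 255. */
    for (; i + 16 <= count; i += 16) {
        __m128i v = _mm_loadu_si128((const __m128i *)(pixels + i));
        v = _mm_adds_epu8(v, voff);
        _mm_storeu_si128((__m128i *)(pixels + i), v);
    }

    /* Scalar tail for whatever doesn't fill a full 16-byte register. */
    for (; i < count; i++) {
        unsigned sum = pixels[i] + offset;
        pixels[i] = (uint8_t)(sum > 255 ? 255 : sum);
    }
}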
 

RobWatson

Mu-43 Hall of Famer
Joined
May 17, 2011
Messages
2,343
Location
Washington - The Evergreen State
A 32GB card will hold 10TB by then, RAW files will be 150MB and we will have an m4/3 sensor with 100MP. Our computers will have a 30GHz 8-core CPU with L1 = 1TB and L2 = 512GB... with a 1MB quad-bus, 512GB RAM,

Zoom, Zoom Zoom Weeeeeeeeeeeeeeeeeeeee

And MS Office will be even slower and PowerPoint presentations will be 100 GB for a handful of slides.
 

Promit

Mu-43 All-Pro
Joined
Jun 6, 2011
Messages
1,820
Location
Baltimore, MD
Real Name
Promit Roy
Moore's Law started hitting speed bumps a while back. Now it's heading right into a wall. Silicon is pretty much tapped out.
Please. We've been hearing about the silicon wall for at least a decade and it hasn't materialized yet, with 25nm production only a scant few months away. Yes there's some physical limit, but we're not staring it down yet.
The only way to get substantially more performance thru silicon is through parallelism.
Uh, yeah. Moore's Law includes that, since the original statement covers the cost of transistor production. I changed it to "density" as a sloppy convenience.
128 isn't that many cores. Back in the day I had a 64,000 CPU machine at work and a 4 CPU machine at home. This was back in the early '90s.
I said desktop machine. In modern consumer workloads, we're dealing with tasks that don't tend to utilize even 4-way CPUs well, let alone wider systems. Yes, a lot of the problem is architectural. But frankly I'm wondering whether Moore's law is economically sensible anymore.

Oh well, I guess we invented JavaScript to spend more cycles on doing less. </troll>

The future is in something like quantum computing.
I don't see a clear "future" in quantum computing except a badass name. It's not even particularly helpful for most types of computing.
We need a new revolution to get much beyond what we currently have.
Maybe, but I'm wondering whether that revolution is hardware or software. Producing a 128-core single computer is already sorta straightforward now. (See slightly high end SPARC server systems.) Programming one...not so easy. Need better software tools.

Performance per watt is turning out to be an interesting evolution though. What if in the future, the average consumer "desktop computer" is just a smartphone on a dock with USB and HDMI? I think that would be much cooler than a 128-core workstation. Based on what I'm seeing from ARM and Atom based systems, we're not very far off from this being a technical possibility.
 

lenshoarder

Mu-43 All-Pro
Joined
Nov 7, 2010
Messages
1,325
Please. We've been hearing about the silicon wall for at least a decade and it hasn't materialized yet, with 25nm production only a scant few months away. Yes there's some physical limit, but we're not staring it down yet.

Uh, yeah. Moore's Law includes that, since the original statement covers the cost of transistor production. I changed it to "density" as a sloppy convenience.

I said desktop machine. In modern consumer workloads, we're dealing with tasks that don't tend to utilize even 4-way CPUs well, let alone wider systems. Yes, a lot of the problem is architectural. But frankly I'm wondering whether Moore's law is economically sensible anymore.

Ah... that's because we hit the wall a decade ago. As I said, that's why the clock rate hasn't gone up much in 10 years. We were at 3 GHz 10 years ago, and we're at 3 GHz now. I'd call that a wall. That's also why CPU makers have gone to multicores in the last decade. Because a single core CPU is pretty much tapped out. It's still getting faster, but the rate of increase has leveled off.

My desktop had 4 CPUs almost 20 years ago.

I've been writing parallel code for 25 years. Pretty much everything I've written in that time will suck up as many CPUs as it can get a hold of. So utilizing 4 CPUs is trivial. Using 128 is just as trivial.

I don't see a clear "future" in quantum computing except a badass name. It's not even particularly helpful for most types of computing.

Maybe, but I'm wondering whether that revolution is hardware or software. Producing a 128-core single computer is already sorta straightforward now. (See slightly high end SPARC server systems.) Programming one...not so easy. Need better software tools.

I guess we differ in that. I see a clear future in quantum computing. From something as simple as being able to simultaneously compute multiple solutions to something as obvious as using entanglement to get beyond the speed of light, which is holding back traditional computing. It's in its infancy. How many people predicted that these computing machines would not see a lot of use back in the days of vacuum tubes? One must have vision to see the future. Without that vision, there will be no future.

Programming in parallel is trivial. It's not a matter of tools, it's a matter of mindset. Some people just can't get their heads around it. Those who can... The best word processor in the world won't make everyone a good writer. A good writer gets on just fine with a pen and a piece of paper.
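Just to show how little it takes once you're in that mindset, here's a toy sketch in C with OpenMP - a made-up example, not anything from my actual code - that spreads an image sum across however many cores the machine has, whether that's 4 or 128:

/* Compile with -fopenmp (gcc/clang). */
#include <stddef.h>
#include <stdint.h>

/* Sum the pixel values of an image, split across every available core.
 * OpenMP hands each thread a chunk of the loop and combines the partial
 * sums; the same source runs unchanged on 4 cores or 128. */
uint64_t total_brightness(const uint8_t *pixels, size_t count)
{
    uint64_t sum = 0;

    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)count; i++)
        sum += pixels[i];

    return sum;
}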
 

lenshoarder

Mu-43 All-Pro
Joined
Nov 7, 2010
Messages
1,325
I'm hoping for more parallelism in computers. The Texas Instruments "ASC" machine that I used over 30 years ago could process an entire 2-D image with one vector instruction, no loops. It ran a 12.5 MHz clock, but could perform 100 MFLOPS. That was floating point, and I could perform bit-wise operations on pixels even faster. The parallelism gave a factor of 80 speedup on some operations. The Intel processors have a very limited "vector" instruction set where you move several operands into a wide register and perform the operations in parallel. Now add in a set of registers that loop across entire images on one instruction, and you speed up the operation without speeding up the clock.

Some of my image processing software took 30 CPU days on a VAX 11/780 to run. I ended up rewriting code to use the block-move instruction set on it to get a factor of 30 speed-up. I miss having memory-to-memory vector instructions.

Vector parallelism has limited application. SIMD instructions are really only a win in an application that applies the same function over and over to large quantities of data. Vector machines do not make good general purpose machines where that's not the case. Today though, most of us have a vector processor in our computers. It's the GPU.
 

Promit

Mu-43 All-Pro
Joined
Jun 6, 2011
Messages
1,820
Location
Baltimore, MD
Real Name
Promit Roy
Ah... that's because we hit the wall a decade ago. As I said, that's why the clock rate hasn't gone up much in 10 years. We were at 3 GHz 10 years ago, and we're at 3 GHz now. I'd call that a wall.
As it sounds like you generally understand computing, I don't understand why you'd even bring this up. The "doubling" phenomenon described by Moore does not, and never has, referred to clock speeds. It's a completely orthogonal concern.
That's also why CPU makers have gone to multicores in the last decade. Because a single core CPU is pretty much tapped out. It's still getting faster, but the rate of increase has leveled off.
Well, lots of reasons. And while your work apparently benefits greatly from highly parallel execution, normal consumer workloads do not. (Excepting games, which are an ongoing issue.)
I guess we differ in that. I see a clear future in quantum computing. From something as simple as being able to simultaneously compute multiple solutions to something as obvious as using entanglement to get beyond the speed of light, which is holding back traditional computing. It's in its infancy. How many people predicted that these computing machines would not see a lot of use back in the days of vacuum tubes? One must have vision to see the future. Without that vision, there will be no future.
While I can't comment on the 'speed of light' issue specifically, my understanding of quantum computing is that it theoretically allows NP-complete problems to be solved in P time. Incredibly useful for certain workloads, none of which you're likely to encounter at home. Unless you're in the business of prime factorization, I guess. In any case, I don't know of any theoretical framework where quantum computing helps accelerate ANY operation you're likely to see on a regular basis as a consumer.
Today though, most of us have a vector processor in our computers. It's the GPU.
Emphasized.
 

wlewisiii

Mu-43 Veteran
Joined
Dec 16, 2011
Messages
438
Location
Hayward, WI
Real Name
William B. Lewis
I'm reminded of the IBM PC-XT I had once upon a time. I got it from a pawn shop and it was top of the line - 640KB RAM, Hercules Graphics, one 5 1/4" 360KB floppy, and one 10MB 5 1/4" full-height boat anchor from Seagate. MFM drive controller IIRC. Was a fun machine to run Minix on - multitasking an 8088 with no memory protection! :eek:
 

lenshoarder

Mu-43 All-Pro
Joined
Nov 7, 2010
Messages
1,325
I'm reminded of the IBM PC-XT I had once upon a time. I got it from a pawn shop and it was top of the line - 640KB RAM, Hercules Graphics, one 5 1/4" 360KB floppy, and one 10MB 5 1/4" full-height boat anchor from Seagate. MFM drive controller IIRC. Was a fun machine to run Minix on - multitasking an 8088 with no memory protection! :eek:

Minix lost out to Linux all because of $69. It was ironic: Minix was the low-cost contender to UNIX, and Linux was the no-cost contender to Minix.

I still have my PC-XT. I also have a hoard of IBM keyboards. The best keyboards ever made. I hoarded them so that I would have a backup supply. None have failed me yet.

My favorite old school machine is my PDP-11 running RSX. I don't know how to describe the feeling. Yes, my Sony Z is like something out of a sci-fi movie compared to it. But there's just something there.... I think it's called nostalgia.
 
