A Supercomputer In Your Pocket

What's been said:

  1. sooperedd commented on Sep 2

    Cool stuff, Barry. Thanks.

  2. dsawy commented on Sep 3

    This is the sort of stuff that makes those of us who know a few things about computer architecture roll our eyes.

    Sure, you can obtain impressive floating-point CPU benchmarks on a modern smart phone. But that’s not what makes a supercomputer.

    Supercomputers had, and still have, not only bleeding-edge floating-point performance; they also have IO channels that would move rivers of data in and out of the CPU(s). A smartphone has an IO pathway that’s the size of a garden hose, relatively speaking. And a good rule of thumb for whether a platform is a supercomputer, or can be compared to one, is this: is there a competent FORTRAN compiler for it?

    • Moopheus commented on Sep 3

      On the other hand, researchers _have_ constructed supercomputers by basically daisy-chaining consumer-grade PCs, and can do fairly sophisticated modeling with banks of videogame GPUs. Sure, the comparisons on this chart gloss over many technical differences between the items being compared–you can’t easily substitute one for the other in many cases. And the chart pretty much just states the obvious–we have a lot more computer power available to us than in the past, enough that we can carry chunks of it around in our pocket and use it for watching cat videos whenever we like.

    • dsawy commented on Sep 3

      Yes, several of the modern supercomputer architectures are massively parallel arrays or “cubes” of Intel or AMD multi-core CPUs. But… again, the IO interfaces between these chips+memory stacks aren’t your run-of-the-mill interfaces. The ability to push huge streams of data in/out of the computational elements of a supercomputer is what makes it a supercomputer. You can’t take on “huge data” problems without being able to get the “huge data” in and out of your wickedly fast CPU.

      GPUs “on your desk” are an example of a processor that is optimized for a particular problem space – i.e., graphics algorithm processing. These chips, in their problem domain, can make a better claim to being a “supercomputer on your desk” than the general-purpose CPU processing your text, because they also have the interfaces necessary to move huge amounts of data through them at very tidy speeds.

      My point was that the ability to run a floating-point benchmark on one’s cell phone doesn’t make it a supercomputer, any more than putting a blower and nitro induction onto a VW Beetle turns it into a drag racing car. Just having an engine that puts out 2,000 BHP at the crankshaft won’t get you 1/4 mile checkered flags.
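
    A rough way to put dsawy’s bandwidth point in numbers is a roofline-style estimate: sustained throughput is capped by the lesser of raw compute and what the memory/IO system can feed. The sketch below is a minimal illustration with assumed ballpark figures; none of the numbers come from the chart or this thread.

    ```python
    # Toy roofline estimate: peak FLOPS say little without the bandwidth
    # to feed them. All numbers are assumed ballpark values for illustration.

    def sustainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
        """Attainable rate = min(raw compute, bandwidth * arithmetic intensity)."""
        return min(peak_gflops, bandwidth_gbs * flops_per_byte)

    # Hypothetical machines (assumed numbers):
    phone = {"peak_gflops": 50.0, "bandwidth_gbs": 10.0}    # modern smartphone
    node  = {"peak_gflops": 500.0, "bandwidth_gbs": 200.0}  # supercomputer node

    intensity = 0.1  # a bandwidth-hungry streaming kernel, ~0.1 FLOP per byte
    for name, m in [("phone", phone), ("node", node)]:
        eff = sustainable_gflops(m["peak_gflops"], m["bandwidth_gbs"], intensity)
        print(f"{name}: peak {m['peak_gflops']:.0f} GFLOPS, "
              f"sustainable on this kernel ~{eff:.1f} GFLOPS")
    ```

    With these assumed numbers, the phone’s benchmark peak collapses to about 1 GFLOPS on a data-streaming workload, while the better-fed node sustains roughly twenty times that. That gap, not the headline benchmark, is what dsawy is describing.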

    • willid3 commented on Sep 3

      And most PC or Mac computers weren’t all that fast; they were just what consumers could or would buy. So if you try to compare them to a specialized computer like a Cray, or even to mainframes of the same vintage, you find they don’t come close. Nor does some of the more recent hardware. They are each solving different problems.
      And while there have been experiments to create supercomputers from a gaggle of PCs, that only works if you have a problem with a small dataset, since you will lose a lot of time just trying to coordinate the systems.
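
    willid3’s coordination point can also be put in rough numbers: splitting work across N commodity PCs only helps while the compute each node does outweighs the fixed cost of coordinating it. A minimal sketch with assumed, illustrative timings (not measurements):

    ```python
    # Toy model: each of N nodes does 1/N of the work plus a fixed
    # coordination overhead. All timings are assumed values for illustration.

    def cluster_speedup(total_compute_s, per_node_overhead_s, nodes):
        """Time on one machine divided by time on a cluster of `nodes` machines."""
        one_machine = total_compute_s
        clustered = total_compute_s / nodes + per_node_overhead_s
        return one_machine / clustered

    for nodes in (2, 8, 32, 128):
        # 100 s of compute, 1 s of coordination overhead per node per run
        print(nodes, round(cluster_speedup(100.0, 1.0, nodes), 1))
    # Speedup flattens far below the node count once the overhead dominates,
    # which is why loosely coupled PCs struggle on communication-heavy problems.
    ```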

    • bigsteve commented on Sep 4

      I am just a hobbyist when it comes to computers. From what I have read, Python has replaced Fortran as the language of choice for mathematics. It is not compiled to machine code, but sits somewhere between a compiled and an interpreted language. I studied Fortran forty years ago in college and found it a tedious language. But from what you have said, maybe Fortran is not completely obsolete yet. Compiled is faster.
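
    bigsteve’s “compiled is faster” point usually shows up in practice as Python handing its heavy loops to compiled libraries rather than replacing them. A rough, machine-dependent sketch of that split (NumPy is an assumption here; it isn’t mentioned in the thread):

    ```python
    # The same sum of squares done in a pure-Python loop and via NumPy,
    # whose inner loop runs as compiled code. Timings vary by machine;
    # NumPy is assumed to be installed.
    import time
    import numpy as np

    n = 2_000_000
    values = [float(i) for i in range(n)]
    array = np.arange(n, dtype=np.float64)

    start = time.perf_counter()
    total_python = sum(v * v for v in values)
    python_s = time.perf_counter() - start

    start = time.perf_counter()
    total_numpy = float(np.dot(array, array))
    numpy_s = time.perf_counter() - start

    print(f"pure Python: {python_s:.3f} s   NumPy (compiled): {numpy_s:.3f} s")
    ```

    That division of labor is how Python can serve as a “language of choice for mathematics” without giving up compiled-code speed where it matters; much of the numerical code underneath still descends from C and Fortran libraries.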

  3. Low Budget Dave commented on Sep 3

    When I was in school, and computers were starting to pass 4.7 MHz, one of the teachers did a quick back-of-the-envelope calculation about the physical limits on speed. He assumed away all manufacturing limitations and asserted that the maximum possible speed was about 3000 MHz (none of us had really pondered the term “GHz”).

    He believed that past 3000 MHz, designers would no longer be able to use semiconductor-based integrated circuits. His theory was that the minimum size for a transistor’s conducting line was 0.5 nm (about 5 atoms), and the minimum space between lines was twice that. Based on the speed of electricity through silicon, he decided that 3000 MHz was the limit of computing speed. (This is an oversimplification, because I don’t understand or remember enough.)

    Although computers have broken that speed limit, using a variety of tricks, I have not heard of anyone who thinks conducting lines can be smaller than 0.5 nm. The distance between lines has been narrowed to a dozen nm (some say) without loss of reliability, so there seems to be room for at least one more order of magnitude of manufacturing improvement.

    But after this next set of upgrades, computing power will (maybe) have hit a wall. There are plenty of new tricks, of course, but we might be fresh out of exponential improvements. This is a comforting thought for some people: Although I could be replaced with a computer, it would have to be a very expensive one.

    And while we are at it, the massive reality simulations (theoretically indistinguishable from the real world) might be simply impossible. The bad news: If you have a rent payment due, you should go ahead and pay it.
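
    The teacher’s exact calculation is lost to memory, but one common form of this back-of-the-envelope argument asks how far a signal can travel in a single clock tick. The sketch below uses assumed values (the signal speed and die size are illustrative, not taken from the comment), and it is not necessarily the derivation the teacher used.

    ```python
    # Back-of-the-envelope clock-speed limit: if a signal must cross the die
    # within one clock cycle, the cycle can't be shorter than that transit time.
    # Assumed values for illustration only.

    SIGNAL_SPEED_M_S = 1.5e8   # assume roughly half the speed of light on-chip
    DIE_SIZE_M = 0.02          # assume a ~2 cm die to traverse

    max_clock_hz = SIGNAL_SPEED_M_S / DIE_SIZE_M
    print(f"limit if a signal must cross the die each cycle: "
          f"{max_clock_hz / 1e9:.1f} GHz")   # ~7.5 GHz with these assumptions
    ```

    Real chips sidestep estimates like this by pipelining and by keeping signals local to small regions of the die, which is presumably part of the “variety of tricks” mentioned above.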

  4. rd commented on Sep 3

    I think that we are only one or two laptop refreshes away from being issued either a Microsoft Surface type of tablet or simply a phone that will dock into a monitor and keyboard at work, with a roll-up screen and folding keyboard for working while travelling.

    When I started working in the early 80s, secretaries had standalone word processors transcribing what I wrote by hand while I did engineering modeling on a computer terminal hooked up to a minicomputer in another city.

    By the late 80s, we were doing most of our own processing on desktop PCs and we could do decent modeling on them as well.

    By the mid-90s, laptops were being issued, and many of our analyses that used to take 2 hrs of minicomputer time now took 2 min on a laptop. We also started having e-mail with clients, etc., and the Internet started to become relevant.

    Now I often don’t even turn on my laptop at home, because I can keep tabs on things through a smartphone. I only turn on my laptop if I need to see things in detail, such as large documents, or to do editing/writing. The biggest constraint is now us, namely our ability to resolve details through visual and tactile interfaces, which is why keyboards and monitors will likely be around for a while, until we can just type on a tabletop and hook something up to our glasses for a heads-up view in their place.

    • DeDude commented on Sep 3

      I am surprised that we don’t yet have a roll-up full size keyboard with wireless connections (and about the size of a cigar when rolled up/away). That would make phones so much more useful as computers.
