[cfe-dev] parallel C++

Edward Givelberg via cfe-dev cfe-dev at lists.llvm.org
Tue Nov 27 09:41:22 PST 2018


Troy,

"Multi-core CPUs are a dead end" is not a sensationalist statement.
Multi-core CPUs are limited to about a dozen cores. Nobody knows how to
program CPUs with 1000 cores.
Some "religious" people may actually like to program with threads, but
there are at least 20 million programmers who use OO programming. How many
of them enjoy threads or MPI?
Not to mention PGAS....

About remote pointers: my question is, specifically, why can't we write
Object * some_object = new Object(a, b, c, d);
in C++, where some_object is not an ordinary pointer but a remote pointer?
It is possible to implement this without breaking existing code.
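
For comparison, here is a minimal sketch of the library-level alternative;
remote_ptr, make_remote and node_id are purely illustrative names, and the
stub below constructs the object locally where a real implementation would
ship the constructor arguments to another node:

#include <iostream>
#include <utility>

// Purely illustrative: a library-level remote pointer. The stub constructs
// the object locally; a real implementation would serialize the constructor
// arguments, send them to the target node, and keep only an opaque handle.
using node_id = int;

template <typename T>
class remote_ptr {
public:
    template <typename... Args>
    static remote_ptr make_remote(node_id node, Args&&... args) {
        remote_ptr p;
        p.node_ = node;
        p.local_stub_ = new T(std::forward<Args>(args)...);
        return p;
    }
    // In a real implementation this would turn into a message to node_.
    T* operator->() const { return local_stub_; }
    node_id where() const { return node_; }
private:
    node_id node_ = 0;
    T* local_stub_ = nullptr;
};

struct Object {
    Object(int a, int b, int c, int d) : sum(a + b + c + d) {}
    int sum;
};

int main() {
    // The remoteness is visible in the type and at the call site.
    auto some_object = remote_ptr<Object>::make_remote(/*node=*/1, 1, 2, 3, 4);
    std::cout << some_object->sum << " (node " << some_object.where() << ")\n";
}

The point of the proposal is that the plain new-expression above would keep
even this much machinery out of the source: the compiler, not a wrapper type,
decides that the pointer is remote.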

Could you please provide me with a reference for causal asynchronous
execution?
You may very well be right that it is not new, but I searched and did not
find anything like it.
It seems logical that someone thought of it before me, but I'd like to see
and read that work.





On Tue, Nov 27, 2018 at 12:17 PM Troy Johnson <troyj at cray.com> wrote:

> > Multi-core CPUs and all the associated software technologies (shared
> > memory, threads, etc) are a technological dead end.
>
>
>
> Wow.
>
>
>
> > I propose to introduce remote pointers into C++. I am very surprised
> > nobody thought of this before.
>
>
>
> They have, and this is why Jeff pointed you to a variety of related work
> that is not cited in the paper.  Just as one example, UPC has the concept
> of a pointer to shared data that may have affinity to another thread on
> another node.  Two recent attempts at a PGAS model for C++ (UPC++ and
> Coarray C++) have similar concepts.
>
>
>
> > I am also proposing a new way of executing code, which I call causal
> > asynchronous execution.
>
>
>
> Again, that’s also not new.  There are a lot of existing concepts in your
> paper, which isn’t by itself bad – perhaps you are combining them all
> together in some nice new way – but please don’t represent the component
> ideas as brand new.
>
>
>
> > The object-oriented computing framework provides the foundation for a
> > new computer architecture, which we call a multiprocessor computer. [From
> > the linked paper]
>
>
>
> That really *really* needs a new name.  :)
>
>
>
> -Troy
>
>
>
> *From:* cfe-dev <cfe-dev-bounces at lists.llvm.org> *On Behalf Of* Edward
> Givelberg via cfe-dev
> *Sent:* Tuesday, November 27, 2018 10:14 AM
> *To:* jeff.science at gmail.com
> *Cc:* cfe-dev at lists.llvm.org
> *Subject:* Re: [cfe-dev] parallel C++
>
>
>
> Jeff,
>
> Multi-core CPUs and all the associated software technologies (shared
> memory, threads, etc) are a technological dead end.
>
> I argue more than that: all software technologies that use processes
> are dead on arrival. This includes the technologies you mention in
> your presentation:
>
> https://www.ixpug.org/images/docs/KAUST_Workshop_2018/IXPUG_Invited2_Hammond.pdf
>
> People have gotten used to processes over decades, so when they talk about
> parallelism they immediately think in terms of processes, and this is the
> root of the problem. I propose object-level parallelism. An object is more
> than a process; it is a virtual machine.
>
>
>
> I propose to introduce remote pointers into C++. I am very surprised
> nobody thought of this before. I'd be curious to know how much work
> people think it would be to do this in LLVM. I know it may be possible to
> introduce something like remote_ptr, but I don't think that is a good idea.
>
>
>
> I am also proposing a new way of executing code, which I call causal
> asynchronous execution. I'd like to know if people find it natural to write
> code like this.
>
>
> On Mon, Nov 26, 2018 at 10:26 PM Jeff Hammond <jeff.science at gmail.com>
> wrote:
>
> I’ll probably have more detailed comments later but the related work you
> may wish to consider includes:
>
> - UPC and Berkeley UPC++
>
> - Charm++
>
> - HPX from LSU
>
> - DASH (http://www.dash-project.org/)
>
> - MADNESS (https://arxiv.org/abs/1507.01888)
>
>
>
> There are quite a few dead parallel C++ dialects from the last millennium
> but it’s probably not worth your time to find and read about them.
>
>
>
> I’m very glad that you used MPI as your communication runtime. This will
> save you lots of pain.
>
>
>
> Jeff
>
>
>
> On Mon, Nov 26, 2018 at 2:57 PM Edward Givelberg via cfe-dev <
> cfe-dev at lists.llvm.org> wrote:
>
>
> Chris Lattner suggested that I post to this mailing list.
>
> I used Clang/LLVM to build a prototype for parallel
> interpretation of C++. It's based on the idea that C++
> objects can be constructed remotely, and accessed via
> remote pointers, without changing the C++ syntax.
> I am a mathematician, not an expert on compilers.
> I am proposing changes to the C++ standard and to the
> compiler architecture, so I'm very interested to hear from
> experts.
> My paper is
>
> https://arxiv.org/abs/1811.09303
> Best regards,
> Ed
>
> -----------------------------------------------------------------
> A solution to the problem of parallel programming
> E. Givelberg
>
> The problem of parallel programming is the most important
> open problem of computer engineering.
> We show that object-oriented languages, such as C++,
> can be interpreted as parallel programming languages,
> and standard sequential programs can be parallelized automatically.
> Parallel C++ code is typically more than ten times shorter than
> the equivalent C++ code with MPI.
> The large reduction in the number of lines of code in parallel C++
> is primarily due to the fact that communication instructions,
> including the packing and unpacking of messages, are generated
> automatically in the implementation of object operations (the kind of
> hand-written MPI code this eliminates is sketched after the abstract).
> We believe that implementation and standardization of parallel
> object-oriented languages will drastically reduce the cost of
> parallel programming.
> This work provides the foundation for building a new computer
> architecture, the multiprocessor computer, including
> an object-oriented operating system and a more energy-efficient,
> easily programmable parallel hardware architecture.
> The key software component of this architecture is a compiler
> for object-oriented languages.  We describe a novel compiler
> architecture with a dedicated back end for the interconnect fabric,
> making the network a part of a multiprocessor computer,
> rather than a collection of pipes between processor nodes.
> Such a compiler exposes the network hardware features
> to the application, analyzes its network utilization, optimizes the
> application as a whole, and generates the code for the
> interconnect fabric and for the processors.
> Since the information technology sector's electric power consumption
> is very high and rising rapidly, the implementation and widespread
> adoption of the multiprocessor computer architecture will
> significantly reduce the world's energy consumption.
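>
> To make the comparison with MPI concrete, here is a minimal sketch of the
> explicit packing, sending and unpacking that a hand-written MPI exchange of
> a small object typically requires (field names and buffer sizes are
> illustrative; run with two ranks, e.g. mpirun -n 2):
>
> #include <mpi.h>
> #include <iostream>
>
> struct Object { int a; double b; };
>
> int main(int argc, char** argv) {
>   MPI_Init(&argc, &argv);
>   int rank;
>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>   char buf[64];
>   if (rank == 0) {
>     Object obj{7, 3.14};
>     int pos = 0;
>     // Pack each field by hand before sending.
>     MPI_Pack(&obj.a, 1, MPI_INT, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
>     MPI_Pack(&obj.b, 1, MPI_DOUBLE, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
>     MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
>   } else if (rank == 1) {
>     Object obj;
>     int pos = 0;
>     MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>     // Unpack in the same order on the receiving side.
>     MPI_Unpack(buf, sizeof(buf), &pos, &obj.a, 1, MPI_INT, MPI_COMM_WORLD);
>     MPI_Unpack(buf, sizeof(buf), &pos, &obj.b, 1, MPI_DOUBLE, MPI_COMM_WORLD);
>     std::cout << obj.a << " " << obj.b << std::endl;
>   }
>   MPI_Finalize();
>   return 0;
> }
>
> In parallel C++ the same transfer is implied by an ordinary object
> operation, and the packing and unpacking code above is generated by the
> compiler.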
>
> _______________________________________________
> cfe-dev mailing list
> cfe-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
> --
>
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>
>

