[cfe-dev] parallel C++
Oleg Smolsky via cfe-dev
cfe-dev at lists.llvm.org
Wed Nov 28 15:45:54 PST 2018
Ed, it sounds like you have an idea for a new language with a new
execution model and a special object representation. I was merely trying
to point out that these ideas have little to do with what C++ compilers
do today.
Oleg.
On 2018-11-28 15:28, Edward Givelberg wrote:
> Oleg,
>
> Maybe I am misunderstanding what you're saying...
> Since I am proposing a different framework for execution,
> the architecture, which has an abstract machine
> and a memory model, will have to change.
> Since I'd like to have remote objects
> that are native to C++, unlike the existing objects, which are all local,
> I am proposing this IOR layer. Access to objects will have to change:
> an object access will no longer be a memory access, unless some
> compiler optimization determines that the object is local.
> So this probably means that it requires changes to the LLVM IR?
> As I said, I don't know enough about the current LLVM architecture
> to make a detailed plan, but I think it is an interesting problem.
>
> Ed
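
To make the contrast concrete, here is a minimal sketch of what a remote
object looks like today when built as a library-level proxy; RemoteRef,
NodeId and rpc_call are made-up names for illustration, not an existing
API and not part of the proposal.

    // Purely illustrative, library-level "remote object" proxy.
    // RemoteRef, NodeId and rpc_call are hypothetical names.
    #include <cstdint>

    using NodeId = std::uint32_t;

    // Pretend transport: ships a method tag and an argument to another
    // node and blocks for the integer reply. A real system would
    // serialize arbitrary arguments.
    int rpc_call(NodeId node, std::uint64_t object_id, int method_tag, int arg);

    class RemoteRef {
    public:
        RemoteRef(NodeId node, std::uint64_t object_id)
            : node_(node), object_id_(object_id) {}

        // A "member access" becomes a message, not a load from local memory.
        int invoke(int method_tag, int arg) const {
            return rpc_call(node_, object_id_, method_tag, arg);
        }

    private:
        NodeId node_;
        std::uint64_t object_id_;
    };

Compiled today, RemoteRef::invoke is ordinary code: the compiler emits
loads of node_ and object_id_ and a call, with no knowledge that the
logical object lives on another node. Making such objects native to the
language, as proposed above, is what would require the abstract machine
and the IR to change.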
>
>
> On Wed, Nov 28, 2018 at 5:42 PM Oleg Smolsky <oleg at cohesity.com> wrote:
>
> On 2018-11-28 13:14, Edward Givelberg via cfe-dev wrote:
> >
> > [...]
> > Naively, it seems to me that LLVM is a sequential VM, so perhaps its
> > architecture needs to be extended.
> > I am proposing an intermediate representation which encodes object
> > operations; let's call it IOR. IOR is translated to interconnect
> > hardware instructions, as well as to LLVM's IR.
> > I am proposing a dedicated back end to generate code for the
> > interconnect fabric.
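
For concreteness only, here is a hypothetical guess at what an IOR
"object operation" could encode, assuming remote objects are addressed
by a node plus a stable object id rather than by a memory address; none
of these names or fields come from the original proposal.

    // Hypothetical sketch only; not an existing IR and not the proposal itself.
    #include <cstdint>
    #include <vector>

    enum class IorOp : std::uint8_t {
        Construct,   // create an object on a chosen node
        Invoke,      // call a method on a (possibly remote) object
        ReadField,   // read a member of a (possibly remote) object
        WriteField,  // write a member of a (possibly remote) object
        Destroy      // tear the object down
    };

    struct IorInstr {
        IorOp op;
        std::uint32_t node;              // where the target object lives
        std::uint64_t object_id;         // stable identity, not an address
        std::uint32_t member;            // method or field index
        std::vector<std::uint64_t> args; // encoded operands
    };

Under that assumption, a back end for the interconnect fabric would
lower such instructions to message sends, while the local cases could be
lowered to ordinary LLVM IR loads, stores and calls.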
>
> Edward, it sounds to me like you are trying to reinvent Smalltalk. Its
> core is really about message passing and perhaps people have made
> attempts to make it parallel already.
>
> On a more serious and specific note, I think you are ignoring the
> "abstract C machine" on which both C and C++ languages are built.
> Fundamentally, objects are laid out in memory (let's ignore the stack
> for now) and are built off primitive and user-defined types. These
> types are known (and stable) throughout the compilation process of a
> single program, and so are the offsets of various fields that comprise
> the objects. All these objects (and often their sub-objects) can be
> read and written anywhere in a single-threaded program. Multi-threaded
> programs must be data-race-free, but essentially follow the same model.
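
A tiny, self-contained illustration of that point (an assumed example,
not taken from this thread): a field access in this model is nothing
more than a load at an offset known at compile time.

    struct Point {
        int x;
        int y;
    };

    int get_y(const Point* p) {
        // The offset of y is known at compile time (4 bytes on typical
        // targets), so this lowers to a single load from p plus that offset.
        // With clang -O2 on x86-64 the whole function is typically just:
        //     mov eax, dword ptr [rdi + 4]
        //     ret
        return p->y;
    }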
>
> The point I am trying to make is that the whole model is built on
> memory accesses that are eventually lowered to the ISA. There is no
> rigid protocol for invoking a member function or reading a member
> variable - things just happen in the program's address space. And then
> there is the code optimizer. The memory accesses (expressed via LLVM
> IR, for example) go through various transformations that reduce and
> eliminate pointless work... at which point you have the target's ISA
> and absolutely no notion of a "method" or "object" (a well-formed
> program cannot tell that the code has been rearranged, reduced,
> reordered, etc.).
>
> I suggest that you take a look at https://godbolt.org and see what the
> compiler emits with -O3 for a few short class/function templates as
> well as normal procedural code.
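
For instance, a short snippet along these lines (again an assumed
example, not one from the thread), pasted into godbolt and compiled with
-O3, comes out with no trace of the class or of any member-function call:

    class Accumulator {
    public:
        void add(int v) { total_ += v; }
        int total() const { return total_; }
    private:
        int total_ = 0;
    };

    int sum_first_n(int n) {
        Accumulator acc;
        for (int i = 1; i <= n; ++i)
            acc.add(i);
        return acc.total();
    }

Both member functions are inlined, acc never needs to exist in memory,
and clang will often even replace the loop with a closed-form
expression; there is no call, no this pointer and no Accumulator left in
the generated code, which is the point about the optimizer made above.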
>
> Oleg.
>