I am looking at the possibility of using the Boost Graph Library (BGL)
for the structural representation of an artificial neural network:
nodes would be neurons, edges would be synapses. The advantage for me
would be that I don't have to worry about the complexity of managing
the graph structure myself.
For this to work, I need to be able to strap behaviour and data onto
the edges and nodes. The bundled properties interface seems to be
a good way to do this, but there is something fundamental that I am
apparently not understanding: what is an edge descriptor, and what is
a vertex descriptor?
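For concreteness, here is the sort of setup I am talking about; Neuron
and Synapse are just placeholder structs standing in for my actual
classes:

    #include <boost/graph/adjacency_list.hpp>

    // placeholder property structs for illustration only
    struct Neuron  { double activation; };
    struct Synapse { double weight; };

    // bundled properties: Neuron data on vertices, Synapse data on edges
    typedef boost::adjacency_list<
        boost::vecS, boost::vecS, boost::bidirectionalS,
        Neuron, Synapse> Network;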
From the code (boost/graph/adjacency_list.hpp):

    typedef typename boost::ct_if_t<is_rand_access,
        vertices_size_type, void*>::type vertex_descriptor;

vertex descriptors are indices or pointers. And from

    typedef detail::edge_desc_impl<directed_category, vertex_descriptor>
        edge_descriptor;
an edge descriptor is actually a struct containing the source and
target vertices and a property pointer (void*). There does not seem
to be a way to get the property object from an edge descriptor
without casting it to the expected type.
Thus, vertex and edge descriptors are really just minimal handles and
provide no way at all to access the relevant data through their own
interface. Instead, BGL expects people to have access to the
containing graph object, to be able to call functions like target(),
out_edges(), or operator[] to get at the downstream vertex/edge, or
at a vertex's/edge's properties.
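That is, something like this sketch, building on the Network typedef
above (propagate() is a hypothetical function of mine, not BGL API):

    typedef boost::graph_traits<Network>::vertex_descriptor VertexD;
    typedef boost::graph_traits<Network>::out_edge_iterator OutIt;

    // everything goes through the graph object; the descriptors
    // alone are opaque handles
    void propagate(Network& net, VertexD v)
    {
        std::pair<OutIt, OutIt> range = boost::out_edges(v, net);
        for (OutIt ei = range.first; ei != range.second; ++ei)
        {
            VertexD downstream = boost::target(*ei, net); // needs the graph
            net[downstream].activation +=                 // operator[] too
                net[v].activation * net[*ei].weight;
        }
    }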
In the context of my application, I now have two options: adopt the
graph-centric perspective of BGL, in which the graph object is
needed to answer any question about the graph, even about a single
component therein; or strap a layer on top of all this to
return to proper object encapsulation (see the sketch after this list):
- where a node knows about its incident edges
- where an edge knows about its source and target vertices
- where both give access to the associated properties.
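Such a wrapper layer might look something like this (the class name
and methods are placeholders of mine):

    // each handle stores a graph reference plus a descriptor --
    // exactly the per-object overhead I complain about below
    class NeuronHandle
    {
    public:
        NeuronHandle(Network& g, VertexD v) : g_(g), v_(v) {}

        // access to the associated properties
        Neuron& properties() const { return g_[v_]; }

        // the node knows about its incident (outgoing) edges
        std::pair<OutIt, OutIt> outgoing() const
        { return boost::out_edges(v_, g_); }

    private:
        Network& g_;
        VertexD  v_;
    };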
If I wanted to go the latter way, I'd have to store an additional
8 bytes (descriptor + graph reference) with each object, which is
just a waste if you ask me. However, there seems to be no other way,
as I cannot derive from vertex/edge descriptors.
Does anyone have any experience with folding BGL into an existing
graph-like structure? My ideal solution would be to have my neuron
and synapse classes derive from the vertex and edge descriptors, so
that I can treat them as components of the graph as well as neural
components.
From looking at the LEDA adaptor, I guess the way to do this is to
specialize graph_traits to replace the edge and node descriptors with
custom classes. If I take care to model the behaviour of the
existing edge and node descriptors with respect to their interfaces,
I shouldn't actually need to override the public graph API, right?
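I.e., something like this purely hypothetical sketch, where NeuralNet
would be my own container class (the remaining graph_traits members,
plus free functions like source(), target(), and out_edges(), would
still have to be provided to model the BGL concepts):

    #include <boost/graph/graph_traits.hpp>

    class NeuralNet;  // my own graph-like container of neurons/synapses

    namespace boost {
        template <>
        struct graph_traits<NeuralNet>
        {
            typedef Neuron*  vertex_descriptor;  // a neuron is its own handle
            typedef Synapse* edge_descriptor;    // likewise for synapses
            typedef directed_tag directed_category;
            typedef allow_parallel_edge_tag edge_parallel_category;
            typedef incidence_graph_tag traversal_category;
            // ... iterator and size typedefs as required by the concepts
        };
    }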
Is it sensible to derive from detail::edge_desc_impl, or should
detail::* stuff not be touched outside of BGL code?
Thanks for any comments, thoughts, suggestions, pointers.
--
martin; (greetings from the heart of the sun.)