# nxpp

Header-only graph utilities on top of Boost Graph Library.
This document explains how to read complexity claims in nxpp.
The key distinction is:
- **Boost complexity**: the cost of the delegated Boost algorithmic core
- **nxpp public-call complexity**: the cost of calling the nxpp API, including wrapper-side translation, bookkeeping, and result materialization

Use the repository README.md for the short user-facing explanation. Use `API_REFERENCE.md` for per-function tables. Use this file for the policy that connects those two views.
Notation used in this file:
- n = |V|, the number of vertices
- m = |E|, the number of edges
- F_maxflow(n, m): dominant complexity of the delegated Boost max-flow core
- F_mincost_cc(n, m): dominant complexity of the delegated Boost cycle-canceling min-cost-flow core
- F_ssp(n, m): dominant complexity of the delegated Boost successive shortest-path min-cost-flow core

nxpp does not document only the algorithmic core. It documents the cost of the whole public function call.
That means the following wrapper work counts:

- NodeID <-> vertex_descriptor translation
- wrapper-side bookkeeping (translation maps, attribute stores, maintained indices)
- result materialization into user-facing containers

So a Boost algorithm and the corresponding nxpp method can have the same or a different dominant complexity order.
There are three useful levels of complexity statements in this project.
### Level 1: same dominant order as Boost

This means the wrapper keeps the same dominant asymptotic order as the Boost algorithm it delegates to.
Typical examples:

- `bfs_edges()`
- `dijkstra_path()`
- `connected_components()`
In these cases, nxpp may still do extra constant-factor or lower-order work, but the dominant asymptotic term stays the same as Boost.
### Level 2: Boost core plus result materialization

This means the underlying algorithmic phase still has the same order as Boost, but nxpp adds a separate cost to build a user-facing result.
Typical examples:

- `dijkstra_shortest_paths()`
- `floyd_warshall_all_pairs_shortest_paths_map()`

For these APIs, the most honest statement is usually:

- nxpp public call: Boost core + result materialization

That is still compatible with the design goal of preserving Boost's algorithmic order while being honest about wrapper work.
### Level 3: wrapper-managed

These are operations whose cost is driven mainly by nxpp's own bookkeeping rather than by one delegated Boost algorithm.
Typical examples:

- `add_node()`
- `remove_node()`

For these functions, the relevant guarantee is the nxpp public-call complexity directly, not a Boost comparison.
The repository is moving toward stronger, real complexity bounds where that can be done without needlessly changing the dominant order of Boost algorithms.
Today, the public surface is mixed:
- the NodeID -> vertex_descriptor translation layer and external attribute stores are now ordered, std::map-backed storage

So the right way to read the docs today is:
- ordered, std::map-backed containers imply O(log n) lookup/insert costs
- indexed results give O(log n) key lookup but keep the construction cost linear, unless the wrapped algorithm itself dominates differently

For this project, preserving Boost complexity should be interpreted as preserving the dominant asymptotic order of the delegated core, not as adding zero wrapper cost. So these two statements are different:

- the nxpp public call keeps the same dominant asymptotic order as the Boost core
- the nxpp public call costs exactly what the Boost core costs, with no wrapper work at all

nxpp aims for the first whenever practical, not automatically the second.
Per-function notes in the tables call out the wrapper work explicitly, for example:

- source/target translation plus path reconstruction
- building ordered distance and predecessor maps
- building a key-sorted indexed result without hashing, with O(log n) key lookup on the returned wrapper
- grouping the discovered tree edges into sparse indexed predecessor / successor views, with O(log n) lookup on the returned wrapper
- using the maintained per-vertex wrapper index property
- converting matrix output into nested maps
- clearing metadata, rebuilding id/descriptor maps, and rebuilding maintained vertex indices as an O(n + m) public operation

The tables below make the side-by-side comparison explicit:
- **nxpp public-call complexity** is the documented end-to-end cost of the nxpp API call
- the Boost-order status column records whether nxpp adds work or deliberately preserves the same dominant order

These tables are intentionally practical rather than exhaustive proofs. For the complete nxpp API list, see `API_REFERENCE.md`.
When a lower-order wrapper term is fixed and clearly dominated, the table keeps only the dominant term in the nxpp column and moves the lower-order detail into the Note column.
The final column classifies the comparison like this:
- yes: same dominant asymptotic order as the Boost counterpart
- no: the nxpp public call can have a different dominant order
- wrapper-managed: there is no clean one-to-one Boost comparison, or the cost is driven mainly by nxpp bookkeeping

### Traversal

| nxpp function | Boost counterpart | Boost complexity | nxpp complexity | Boost-order status | Note |
|---|---|---|---|---|---|
bfs_edges(start) | boost::breadth_first_search with tree-edge collection | O(n + m) | O(n + m) | yes | nxpp materializes the final vector of discovered tree edges. |
bfs_tree(start) | boost::breadth_first_search + tree construction | O(n + m) | O(n + m) | yes | nxpp builds a new wrapper graph for the BFS tree. |
bfs_successors(start) | boost::breadth_first_search with tree-edge discovery | O(n + m) | O(n + m) | yes | The result is indexed: linear construction, O(log n) key lookup. |
dfs_edges(start) | boost::depth_first_search with tree-edge collection | O(n + m) | O(n + m) | yes | nxpp materializes the final vector of discovered tree edges. |
dfs_tree(start) | boost::depth_first_search + tree construction | O(n + m) | O(n + m) | yes | nxpp builds a new wrapper graph for the DFS tree. |
dfs_predecessors(start) | boost::depth_first_search with predecessor discovery | O(n + m) | O(n + m) | yes | The result is indexed: linear construction, O(log n) key lookup. |
dfs_successors(start) | boost::depth_first_search with tree-edge discovery | O(n + m) | O(n + m) | yes | The result is indexed: linear construction, O(log n) key lookup. |
breadth_first_search(start, visitor) | boost::breadth_first_search | O(n + m) | O(n + m) | yes | The wrapper only adds the initial NodeID -> descriptor translation. |
depth_first_search(start, visitor) | boost::depth_first_search | O(n + m) | O(n + m) | yes | The wrapper only adds the initial NodeID -> descriptor translation. |
### Shortest paths

| nxpp function | Boost counterpart | Boost complexity | nxpp complexity | Boost-order status | Note |
|---|---|---|---|---|---|
shortest_path(source, target) | unweighted Boost shortest-path primitives based on BFS | O(n + m) | O(n + m) | yes | nxpp adds endpoint translation and path reconstruction. |
shortest_path_length(source, target) | unweighted Boost shortest-path primitives based on BFS | O(n + m) | O(n + m) | yes | nxpp adds endpoint translation. |
dijkstra_path(...) | boost::dijkstra_shortest_paths | O((n + m) log n) | O((n + m) log n) | yes | Path reconstruction is bounded by O(n) and stays lower-order than the Dijkstra core. |
dijkstra_path_length(...) | boost::dijkstra_shortest_paths | O((n + m) log n) | O((n + m) log n) | yes | nxpp adds endpoint translation only. |
dijkstra_shortest_paths(source) | boost::dijkstra_shortest_paths | O((n + m) log n) | O((n + m) log n) | yes | Ordered distance / predecessor materialization is O(n log n), which stays lower-order than the Dijkstra core. Path reconstruction is now on-demand through path_to(target). |
bellman_ford_path(...) | boost::bellman_ford_shortest_paths | O(nm) | O(nm) | yes | Path reconstruction is bounded by O(n) and does not change the dominant term. |
bellman_ford_path_length(...) | boost::bellman_ford_shortest_paths | O(nm) | O(nm) | yes | The final accumulation pass is O(L) with L <= n - 1, so it is lower-order and kept only in this note. |
bellman_ford_shortest_paths(source) | boost::bellman_ford_shortest_paths | O(nm) | O(nm) | yes | Ordered distance / predecessor materialization is O(n log n), which stays lower-order than the Bellman-Ford core. Path reconstruction is now on-demand through path_to(target). |
dag_shortest_paths(source) | boost::dag_shortest_paths | O(n + m) | O(n + m) | yes | Ordered distance / predecessor materialization is O(n log n), which stays lower-order than the DAG shortest-path core. Path reconstruction is now on-demand through path_to(target). |
floyd_warshall_all_pairs_shortest_paths() | boost::floyd_warshall_all_pairs_shortest_paths | O(n^3) | O(n^3) | yes | The matrix view preserves the same dominant order. |
floyd_warshall_all_pairs_shortest_paths_map() | boost::floyd_warshall_all_pairs_shortest_paths | O(n^3) | O(n^3) | yes | Converting the matrix into nested ordered maps adds O(n^2 log n), which is lower-order than O(n^3). |
### Components and centrality

| nxpp function | Boost counterpart | Boost complexity | nxpp complexity | Boost-order status | Note |
|---|---|---|---|---|---|
connected_components() | boost::connected_components | O(n + m) | O(n + m) | yes | nxpp builds an indexed NodeID -> component_id result in key order. |
strongly_connected_component_groups() | boost::strong_components | O(n + m) | O(n + m) | yes | nxpp groups SCCs into vectors after the Boost run without changing the dominant term. |
strong_components() | boost::strong_components | O(n + m) | O(n + m) | yes | nxpp builds an indexed NodeID -> SCC root result without adding n log n. |
degree_centrality() | degree accumulation through BGL adjacency iteration | O(n + m) | O(n + m) | yes | nxpp materializes a normalized indexed result while keeping construction linear. |
pagerank() | fixed-iteration power method over BGL adjacency | O(k(n + m)) for k iterations | O(k(n + m)) | yes | nxpp runs 20 iterations and materializes an indexed result; dangling-node handling adds one linear pass per iteration. |
betweenness_centrality() | Brandes algorithm over BGL adjacency | O(nm) unweighted | O(nm + n log n) | yes | nxpp applies Brandes BFS for each source and materializes a normalized indexed result; result construction adds O(n log n) for ordered key insertion. |
### Minimum spanning trees

| nxpp function | Boost counterpart | Boost complexity | nxpp complexity | Boost-order status | Note |
|---|---|---|---|---|---|
kruskal_minimum_spanning_tree() | boost::kruskal_minimum_spanning_tree | O(m log m) | O(m log m) | yes | nxpp materializes MST edges as NodeID pairs. The table uses the standard Kruskal bound documented for the underlying Boost algorithm path. |
prim_minimum_spanning_tree(root) | boost::prim_minimum_spanning_tree | O((n + m) log n) | O((n + m) log n) | yes | The ordered parent map adds O(n log n), which stays lower-order than Prim's dominant term. The table uses the standard Prim bound documented for the underlying Boost algorithm path. |
### Flow and min-cost flow

| nxpp function | Boost counterpart | Boost complexity | nxpp complexity | Boost-order status | Note |
|---|---|---|---|---|---|
maximum_flow(source, sink, ...) | boost::edmonds_karp_max_flow | F_maxflow(n, m) | O(n + m + F_maxflow(n, m)) | yes | nxpp adds auxiliary-graph construction and result materialization around the delegated Boost max-flow core. |
minimum_cut(source, sink, ...) | max-flow + residual BFS | O(n + m + F_maxflow(n, m)) | O(n + m + F_maxflow(n, m)) | yes | Both sides include the residual BFS on top of the max-flow core. nxpp also materializes the cut partition and cut-edge list. |
max_flow_min_cost_cycle_canceling(...) | min-cost-flow pipeline based on boost::cycle_canceling | F_mincost_cc(n, m) | O(n + m + F_mincost_cc(n, m)) | yes | nxpp adds auxiliary-graph construction and result shaping. |
successive_shortest_path_nonnegative_weights(...) | boost::successive_shortest_path_nonnegative_weights | F_ssp(n, m) | O(n + m + F_ssp(n, m)) | yes | nxpp adds auxiliary-graph construction and result shaping. |
max_flow_min_cost(...) | current default min-cost-flow wrapper path | F_mincost_cc(n, m) | O(n + m + F_mincost_cc(n, m)) | yes | The current default still delegates to the cycle-canceling path, with linear auxiliary construction and result shaping on top. |
### Graph mutation and attributes

| nxpp function | Boost counterpart | Boost complexity | nxpp complexity | Boost-order status | Note |
|---|---|---|---|---|---|
add_node(id) | boost::add_vertex | O(1) | O(log n) | wrapper-managed | The Boost-side figure is the default vecS insertion bound; nxpp maintains ordered NodeID -> descriptor translation and wrapper index metadata. |
has_node(id) | no standalone Boost primitive beyond wrapper-owned lookup | — | O(log n) | wrapper-managed | Pure lookup in the ordered translation map. |
add_edge(u, v, ...) | boost::add_edge | — | O(log n + deg(u)) | wrapper-managed | Raw BGL insertion cost depends on graph configuration. For multigraph insertion, the public-call bound drops to O(log n). nxpp adds endpoint translation and edge-id / attribute bookkeeping. |
remove_node(u) | boost::clear_vertex + boost::remove_vertex | — | O(n + m) | wrapper-managed | Raw BGL removal cost depends on graph configuration and erase path. nxpp also clears wrapper metadata, translation maps, and maintained indices. |
clear() | graph clear/reset primitives | — | O(n + m) | wrapper-managed | Raw BGL clear/reset cost is not treated here as a standalone contract. nxpp also resets attribute stores and tracked edge IDs. |
has_node_attr(u, key) | no direct Boost counterpart; attributes are wrapper-owned | — | O(log n + log A) | wrapper-managed | Looks up attributes in ordered stores external to the BGL graph. |
get_edge_attr(...) | no direct Boost counterpart; attributes are wrapper-owned | — | O(log n + deg(u) + log m + log A) | wrapper-managed | Includes edge resolution and ordered lookup in wrapper-owned attribute stores. The edge-id overload is O(log m + log A). |
When documenting or reviewing complexity for nxpp, prefer this style:
- always state the nxpp public-call complexity
- compare it to the Boost core's dominant order where a counterpart exists

That keeps the docs honest without hiding where nxpp intentionally trades extra work for a more convenient C++ API.