Crate heapless

static friendly data structures that don’t require dynamic memory allocation

The core principle behind heapless is that its data structures are backed by a static memory allocation. For example, you can think of heapless::Vec as an alternative version of std::vec::Vec with fixed capacity that cannot be re-allocated on the fly (e.g. via push).

All heapless data structures store their memory allocation inline and specify their capacity via a const generic parameter N. This means that you can instantiate a heapless data structure on the stack, in a static variable, or even in the heap.

use heapless::Vec; // fixed capacity `std::vec::Vec`

// on the stack
let mut xs: Vec<u8, 8> = Vec::new(); // can hold up to 8 elements
xs.push(42).unwrap();
assert_eq!(xs.pop(), Some(42));

// in a `static` variable
static mut XS: Vec<u8, 8> = Vec::new();

let xs = unsafe { &mut XS };

xs.push(42).unwrap();
assert_eq!(xs.pop(), Some(42));

// in the heap (though kind of pointless because no reallocation)
let mut ys: Box<Vec<u8, 8>> = Box::new(Vec::new());
ys.push(42).unwrap();
assert_eq!(ys.pop(), Some(42));

Because they have fixed capacity, heapless data structures don’t implicitly reallocate. This means that operations like heapless::Vec::push are truly constant time rather than amortized constant time with a potentially unbounded (allocator-dependent) worst-case execution time, which is unacceptable for hard real-time applications.

heapless data structures don’t use a memory allocator, which means there is no risk of an uncatchable Out Of Memory (OOM) condition while performing operations on them. It’s certainly possible to run out of capacity while growing heapless data structures, but the API lets you handle this possibility: operations that may exhaust the capacity of the data structure return a Result.
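
In practice that error handling looks like the following; a minimal sketch using the Vec shown above, where a full vector hands the rejected element back instead of reallocating:

use heapless::Vec;

let mut xs: Vec<u8, 2> = Vec::new(); // room for at most 2 elements
xs.push(1).unwrap();
xs.push(2).unwrap();

// the vector is full: instead of reallocating, `push` returns the rejected
// element to the caller, who decides how to recover
match xs.push(3) {
    Ok(()) => unreachable!("the capacity is only 2"),
    Err(rejected) => assert_eq!(rejected, 3),
}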

List of currently implemented data structures:

  • Arc – like std::sync::Arc but backed by a lock-free memory pool rather than #[global_allocator]
  • Box – like std::boxed::Box but backed by a lock-free memory pool rather than #[global_allocator]
  • BinaryHeap – priority queue
  • IndexMap – hash table
  • IndexSet – hash set
  • LinearMap – fixed capacity map / dictionary whose lookups are a linear search over the stored entries
  • Object – objects managed by an object pool
  • String – fixed capacity version of std::string::String (see the usage sketch after this list)
  • Vec – fixed capacity version of std::vec::Vec
  • mpmc::Q* – multiple producer multiple consumer lock-free queue
  • spsc::Queue – single producer single consumer lock-free queue
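
As a rough usage sketch of a couple of entries from this list (the capacities below are arbitrary; FnvIndexMap is the crate’s IndexMap alias with an FNV hasher, which requires a power-of-two capacity):

use heapless::{FnvIndexMap, String};

// fixed capacity string: `push_str` fails instead of growing the buffer
let mut s: String<16> = String::new();
s.push_str("hello").unwrap();
assert_eq!(s.as_str(), "hello");

// fixed capacity hash map; the capacity of `FnvIndexMap` must be a power of two
let mut map: FnvIndexMap<&str, u32, 8> = FnvIndexMap::new();
map.insert("answer", 42).unwrap();
assert_eq!(map.get("answer"), Some(&42));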

Optional Features

The heapless crate provides a number of optional Cargo features; the full list is declared in the crate’s Cargo.toml.

Minimum Supported Rust Version (MSRV)

This crate does not have a Minimum Supported Rust Version (MSRV) and may make use of language features and standard library APIs that are only available in the latest stable Rust release.

In other words, changes in the Rust version requirement of this crate are not considered semver-breaking changes and may occur in patch releases.

Re-exports

  • pub use binary_heap::BinaryHeap; (usage sketched after this list)
  • pub use indexmap::Bucket;
  • pub use indexmap::Pos;
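
As a brief sketch of the re-exported BinaryHeap (assuming the Max kind from the binary_heap module; the capacity is arbitrary):

use heapless::BinaryHeap;
use heapless::binary_heap::Max;

// a max-heap with inline storage for up to 8 elements
let mut heap: BinaryHeap<u8, Max, 8> = BinaryHeap::new();
heap.push(3).unwrap();
heap.push(7).unwrap();
heap.push(1).unwrap();

// a `Max` heap pops the largest element first
assert_eq!(heap.pop(), Some(7));
assert_eq!(heap.peek(), Some(&3));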

Modules

  • binary_heap – A priority queue implemented with a binary heap.
  • mpmc – A fixed capacity Multiple-Producer Multiple-Consumer (MPMC) lock-free queue
  • pool – Memory and object pools
  • sorted_linked_list – A fixed sorted priority linked list, similar to BinaryHeap but with different properties on push, pop, etc. For example, the sorting of the list will never memcpy the underlying value, so having large objects in the list will not cause a performance hit.
  • spsc – Fixed capacity Single Producer Single Consumer (SPSC) queue (see the sketch after this list)
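
For example, a minimal sketch of the spsc queue (assuming the 0.8 API, where split borrows the queue and yields a lock-free producer/consumer pair):

use heapless::spsc::Queue;

// `Queue<T, N>` keeps one slot free internally, so it holds up to N - 1 items
let mut rb: Queue<u8, 4> = Queue::new();
rb.enqueue(1).unwrap();
assert_eq!(rb.dequeue(), Some(1));

// `split` hands out a producer and a consumer that can live in different
// execution contexts, e.g. an interrupt handler and the main loop
let (mut tx, mut rx) = rb.split();
tx.enqueue(2).unwrap();
assert_eq!(rx.dequeue(), Some(2));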

Macros

  • arc_pool – Creates a new ArcPool singleton with the given $name that manages the specified $data_type
  • box_pool – Creates a new BoxPool singleton with the given $name that manages the specified $data_type (see the sketch after this list)
  • object_pool – Creates a new ObjectPool singleton with the given $name that manages the specified $data_type
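
A rough sketch of box_pool (assumptions: the 0.8 pool::boxed API with BoxBlock plus the alloc and manage methods, and a target with the required atomics support; the names P and BLOCK are illustrative):

use heapless::{box_pool, pool::boxed::BoxBlock};

box_pool!(P: u32); // a pool named `P` that hands out boxed `u32` values

// the pool owns no memory yet, so allocation fails and returns the value
assert!(P.alloc(0).is_err());

// donate a statically allocated block to the pool
let block: &'static mut BoxBlock<u32> = unsafe {
    static mut BLOCK: BoxBlock<u32> = BoxBlock::new();
    &mut BLOCK
};
P.manage(block);

// allocation now succeeds; dropping the box returns the block to the pool
let boxed = P.alloc(42).unwrap();
assert_eq!(*boxed, 42);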

Structs

Enums

  • Entry – A view into an entry in the map

Type Aliases