Rust's Typenum library internals, Part 1

Posted on Jan 10, 2022

This post's raison d'être

I've started programming in Rust, and in order to contribute to an open-source library that makes heavy use of type-level programming via the Typenum library, I've puzzled over how typenum works.

This post is a stab at detailing how I broke down the essentials of the Typenum library for myself, and it also aims to be a tutorial on type-level programming in Rust.

Warning: while the Typenum library also deals with runtime behaviour, in my regurgitation of the library's code I've concentrated only on the type-level code relevant to compile-time behaviour, ignoring the runtime side.

While the code and the idea of type-level programming are beautiful, there is no advanced magic at play here. For a brief glimpse into the simplicity of the code, you might want to clone the typenum library and run

cd typenum; cargo build 

then open up the macro-generated content in the file target/debug/build/typenum-*/out/consts.rs, where you should see a list of generated definitions of type-level numbers.
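The generated definitions look roughly like this (an illustrative excerpt from memory, not copied verbatim; the real file also implements a number of traits for each alias):

pub type U0 = UTerm;
pub type U1 = UInt<UTerm, B1>;
pub type U2 = UInt<UInt<UTerm, B1>, B0>;
pub type U3 = UInt<UInt<UTerm, B1>, B1>;
// ... one alias per number, all built out of UTerm, B0 and B1

Don't worry if these names mean nothing yet; we'll build UTerm, UInt, B0 and B1 ourselves below.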


For each code block given below, I've verified that it compiles and runs (two exceptions: the second-to-last block is a fragment that won't compile on its own, and the block marked with a WARNING comment fails to compile on purpose). Try running each one with cargo test after you've made a cargo project.

High level view of what we're exploring in this post

The Typenum library deals with type-level integers and the operations on them. Both are built on binary (bit) representations, again expressed at the type level. In this post we're going to go sequentially through the following list

  1. Bit representation and the NOT and OR operators (all at the type level)
  2. Unsigned integer bit representation and bit addition (all at the type level)
  3. Addition of unsigned integers (at the type level)

Bit representation and the NOT and OR operators

             
// empty trait to be able to refer to the 0 and 1 bits together
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}

             
             

Hopefully the above is straightforward code. If not, here's a bit of explanation: I have two structs that are unified via a common trait called Bit, which requires that the structs implementing it have the Default trait either implemented or derived.

Now comes the first portion of the main meat: the actual type-level bit functions, namely NOT and bit-OR. If nothing else makes sense, focus solely on the associated type in each trait implementation; it carries the result/output of the type-level function.

We start off by overloading the Not operator from the standard library

             
use core::ops::{Not};

// bit trait
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}


impl Not for B0 {
    type Output = B1;
    fn not(self) -> Self :: Output {
        B1
    }
}

impl Not for B1 {
    type Output = B0;
    fn not(self) -> Self :: Output {
        B0
    }
}
             
             

Again, concentrate on the Output.

Let's start off by writing tests to see the type-level functions at play, and then we'll go over questions like: how do the implemented traits map to a run-of-the-mill function structure, and what even are the args here?

             
use core::ops::{Not};

// bit trait
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}


impl Not for B0 {
    type Output = B1;
    fn not(self) -> Self :: Output {
        B1
    }
}

impl Not for B1 {
    type Output = B0;
    fn not(self) -> Self :: Output {
        B0
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn not_test() {
        let a: Option<B0> = None;
        let b: Option<<B1 as Not>::Output> = None;
        assert_eq!(a, b);
     }
}
             
             

To make things clearer, think of the traits here as functions and the structs as their args.
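Here's a rough analogy (my own illustration, not typenum code). At the value level a function maps values to values; at the type level, a trait impl plays the same role, with Self as the argument and the associated type as the return value:

// Value level: a function from a value to a value.
fn not_value(b: bool) -> bool {
    !b
}

// Type level: "B1 implements Not with Output = B0" reads as not(B1) = B0.
// The impl block is the function definition, Self is the argument,
// and the associated type Output is the return value.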

Let's look at the following line from the above code again

                 
let b: Option<<B1 as Not>::Output> = None;
                 
             

It works: in spite of the type mumbo-jumbo, a and b are equal even at the type level! Don't believe me? Try changing B0 in a's definition to B1 and run the code again.
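In case you'd rather see the failure than take my word for it, here's the broken variant (my illustration, kept commented out); the compiler rejects the assert because the two Options are now different types:

// This version does NOT compile: `a` would be an Option<B1> while `b`
// is an Option<B0> (the Output of Not for B1), so the arguments of
// assert_eq! have mismatched types.
// let a: Option<B1> = None;
// let b: Option<<B1 as Not>::Output> = None;
// assert_eq!(a, b);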

The straightforward interpretation of the above type-level code is: the associated type of the Not trait as implemented by the struct B1.

The other interpretation perhaps becomes a bit clearer if we introduce a type alias

                 
use core::ops::{Not};

// bit trait
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}


impl Not for B0 {
    type Output = B1;
    fn not(self) -> Self :: Output {
        B1
    }
}

impl Not for B1 {
    type Output = B0;
    fn not(self) -> Self :: Output {
        B0
    }
}

// looky here, BNot is a unary function and X the arg
type BNot<X> = <X as Not>::Output;

#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn not_test() {
      let a: Option<B0> = None;
      let b: Option<BNot<B1>> = None;
      assert_eq!(a, b);
   }
}
             
             

Hopefully the type alias makes the mapping from associated-type-plus-trait-plus-struct to a function clear.
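One nice consequence (my own example, assuming the block above is in scope): the alias composes like an ordinary function, so a double negation lands us back where we started:

// not(not(0)) = 0, written as nested type aliases
type Roundtrip = BNot<BNot<B0>>;

#[test]
fn double_not_test() {
    let a: Option<B0> = None;
    let b: Option<Roundtrip> = None;
    assert_eq!(a, b);
}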

Let's try overloading the standard library's BitOr trait for our bit representation

                 
use core::ops::{BitOr};

// bit trait
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}


// Rhs defaults to Self, so technically we could drop the angle brackets and the B0 inside them here
impl BitOr<B0> for B0 {
    type Output = B0;
    fn bitor(self, _:B0) -> Self :: Output {
        B0
    }
}

// Rhs defaults to Self, so technically we could drop the angle brackets and the B1 inside them here
impl BitOr<B1> for B1 {
     type Output = B1;
     fn bitor(self, _:B1) -> Self :: Output {
        B1
     }
}


impl BitOr<B1>for B0 {
     type Output = B1;
     fn bitor(self, _:B1) -> Self :: Output {
        B1
  }
}


impl BitOr<B0>for B1 {
     type Output = B1;
     fn bitor(self, _:B0) -> Self :: Output {
        B1
  }
}


// looky here, BOr is a binary function, X and Y are the args
type BOr<X,Y> = <X as BitOr<Y>>::Output;

#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn bitor_test() {
      let b: Option<B1> = None;
      let c: Option<BOr<B1,B0>> = None;
      let d: Option<BOr<B0,B1>> = None;
      assert_eq!(b, d);
      assert_eq!(c, d);
   }
}
                 
             

Like I mentioned before, type-level functions, though beautiful, aren't magical.

For the BitOr function you have to lay out every possible combination of inputs, taking the order of the args into account. So for the 4 possible input pairs of bitor, you have to type out (or programmatically generate) the code for each one.
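As a quick exercise (my own sketch, assuming the Not impls and the BNot alias from the earlier blocks are pasted into the same file), the two type-level functions compose into derived gates:

// NOR, defined by composing the type-level OR and NOT
type BNor<X,Y> = BNot<BOr<X,Y>>;

#[test]
fn nor_test() {
    // nor(0, 0) = not(or(0, 0)) = not(0) = 1
    let a: Option<B1> = None;
    let b: Option<BNor<B0,B0>> = None;
    assert_eq!(a, b);
}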

Unsigned Integer bit representation and Bit addition

Now we move on to creating our unsigned-integer bit representation and putting in functions that add a 0 or 1 bit to these representations

	
use std::marker;

// way to collectively refer to bits 0 and 1
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}

// way to collectively refer to 0 (UTerm) and the general unsigned integer (UInt)
pub trait Unsigned:Default {}

#[derive(PartialEq, Debug, Default)]
pub struct UTerm;
impl Unsigned for UTerm {}

// the marker field is there to get the compiler to stop complaining about the
// non-usage of the declared generic type parameters
#[derive(PartialEq, Debug, Default)]
pub struct UInt<U:Unsigned,B:Bit>{
 _marker: marker::PhantomData<(U,B)>
}
impl <U: Unsigned,B: Bit> Unsigned for UInt<U,B>{}

// T (T stands for the UTerm terminator): zero
type U0 = UTerm;
// T1: binary 1
type U1 = UInt<UTerm,B1>;
// T10: binary 10 = 2
type U2 = UInt<UInt<UTerm,B1>,B0>;
// T11: binary 11 = 3
type U3 = UInt<UInt<UTerm,B1>,B1>;
// T100: binary 100 = 4
type U4 = UInt<UInt<UInt<UTerm,B1>,B0>,B0>;
// T101: binary 101 = 5
type U5 = UInt<UInt<UInt<UTerm,B1>,B0>,B1>;

	
	

There is a lot to unpack here

Like we did with our 0 and 1 structs and their overarching trait, we do the same for our stab at a binary representation of unsigned integers: two structs, which in this case are zero (UTerm) and the general unsigned integer (UInt) expressed as a recursive definition, plus the overarching trait referred to previously, which is the Unsigned trait.

If you missed the comment about the PhantomData part: it's there to keep the compiler quiet about us not giving a damn about the value aspect of the struct; for this tutorial/piece on type programming we really, really only care about the types.
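To see why the marker is needed, here's a minimal sketch of what happens without it (my illustration, kept commented out); Rust makes unused type parameters a hard error, E0392, rather than a warning:

// This does NOT compile: U and B are declared but never used, and the
// compiler rejects the struct with error E0392 ("parameter is never used").
// pub struct UInt<U: Unsigned, B: Bit>;

// PhantomData "uses" the parameters without storing any runtime data,
// which is exactly what we want for purely type-level bookkeeping.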

As for the recursive part, the first six type definitions of unsigned numbers are listed above, which hopefully gives an insight into how to form them. There are two concepts to keep in mind here: recursive definition and the binary representation of unsigned numbers.
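As a worked example (my own, following the same pattern as the aliases above), here's the number 6, binary 110, built up bit by bit; each UInt<U,B> wraps the bits seen so far (U) around the next bit (B), most significant bit first:

// 6 = 0b110, built one bit at a time:
//   start:     UTerm                                  (empty prefix, i.e. zero)
//   consume 1: UInt<UTerm,B1>                         = 1
//   consume 1: UInt<UInt<UTerm,B1>,B1>                = 0b11 = 3
//   consume 0: UInt<UInt<UInt<UTerm,B1>,B1>,B0>       = 0b110 = 6
type U6 = UInt<UInt<UInt<UTerm,B1>,B1>,B0>;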

Now let's try overloading the Add trait from the standard library for adding either the 0 or the 1 bit (at the type level) to a type-level unsigned integer. Because type-level function definitions need to be explicit, let's list out what we need to jot down code-wise to get this working

  1. Representation for adding B0 to UTerm
  2. Representation for adding B1 to UTerm
  3. Representation for adding B0 to UInt<U,B>
  4. Representation for adding B1 to UInt<U,B0>
  5. Representation for adding B1 to UInt<U,B1>

Let's fire away

	
use std::ops::{Add};
use std::marker;

// way to collectively refer to bits 0 and 1
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}

// way to collectively refer to 0 (UTerm) and the general unsigned integer (UInt)
pub trait Unsigned:Default {}

#[derive(PartialEq, Debug, Default)]
pub struct UTerm;
impl Unsigned for UTerm {}

// the marker field is there to get the compiler to stop complaining about the
// non-usage of the declared generic type parameters
#[derive(PartialEq, Debug, Default)]
pub struct UInt<U:Unsigned,B:Bit>{
 _marker: marker::PhantomData<(U,B)>
}
impl <U: Unsigned,B: Bit> Unsigned for UInt<U,B>{}

// T (T stands for the UTerm terminator): zero
type U0 = UTerm;
// T1: binary 1
type U1 = UInt<UTerm,B1>;
// T10: binary 10 = 2
type U2 = UInt<UInt<UTerm,B1>,B0>;
// T11: binary 11 = 3
type U3 = UInt<UInt<UTerm,B1>,B1>;
// T100: binary 100 = 4
type U4 = UInt<UInt<UInt<UTerm,B1>,B0>,B0>;
// T101: binary 101 = 5
type U5 = UInt<UInt<UInt<UTerm,B1>,B0>,B1>;

impl Add<B0> for UTerm {
  type Output = UTerm;
  fn add(self, _:B0) -> Self::Output {
    UTerm
  }
}

impl Add<B1> for UTerm {
  type Output = UInt<UTerm,B1>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

impl <U:Unsigned, B:Bit> Add<B0> for UInt<U,B> {
   type Output = Self;
   fn add(self, _:B0) -> Self::Output {
     self
   }
}

impl <U:Unsigned> Add<B1> for UInt<U,B0> {
  type Output = UInt<U,B1>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

// All the extra constraints on the types are there to
// reassure the compiler that the Output type matches the type
// we're telling it it should output
impl <U:Unsigned> Add<B1> for UInt<U,B1>
  where U: Add<B1>,
        <U as Add<B1>>::Output: Unsigned
{
  // note the carry: the last bit flips to 0 and the 1 is added into U
  type Output = UInt<<U as Add<B1>>::Output, B0>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

	
	

Of the five items we needed to implement (and we did), the last one stands out with all its extra type constraints. So why, why, why is it that we need to provide all these new constraints? Can't we do without?

The Rust compiler tries to help us keep our types lining up (in order to minimize run-time bugs) but isn't magically able to do so. So when we say the output will be of a certain type, in this case the Output of another trait, we'd better provide proof that the input type will somehow end up as the output type we've stated in the implementation. That proof happens to be the extra type constraints.
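To make this concrete, here's the same impl with the proof stripped out (my illustration, kept commented out); the compiler refuses it:

// This does NOT compile. Without `U: Add<B1>` the projection
// `<U as Add<B1>>::Output` doesn't exist as far as the compiler is
// concerned, and without the `: Unsigned` bound the projection can't be
// used as UInt's first type parameter.
// impl <U:Unsigned> Add<B1> for UInt<U,B1> {
//     type Output = UInt<<U as Add<B1>>::Output, B0>;
//     fn add(self, _:B1) -> Self::Output { UInt::default() }
// }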

You could argue that the compiler should be smart enough to look outside the scope of the implementation and see that nothing prevents our input types from eventually mapping to our output type. My answer is a guess at this point: this sort of smartness would be a pain to implement, ensuring it scales without compromising the guarantee of correctness might well be impossible, and insisting on local proofs of correctness, at the cost of some developer inconvenience, guarantees correctness over a much larger number of programs. Again, this is a guess.

So what are the extra type constraints saying here?

Remember, we're trying to go from UInt<U,B1> (our input, where U: Unsigned) to UInt<<U as Add<B1>>::Output, B0> (our output). Remember that UInt requires its U parameter to have the Unsigned trait implemented, so it is on us to reassure the compiler that after our "addition" we get something that has the Unsigned trait implemented.

Back to what the extra type constraints say:

  1. U in our input UInt has the Unsigned trait implemented
  2. U in our input UInt has the add-B1 trait (Add<B1>) implemented
  3. The output of that addition of B1 has the Unsigned trait implemented
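Before moving on, it may help to unfold the recursion by hand (my own trace, written with the Add1 alias defined just below). Adding 1 to 3, binary 11, ripples a carry all the way up:

// Add1<U3>, i.e. 0b11 + 1 = 0b100:
//   Add1<UInt<UInt<UTerm,B1>,B1>>
//     = UInt<Add1<UInt<UTerm,B1>>, B0>       // 1 + 1 = 0, carry into the prefix
//     = UInt<UInt<Add1<UTerm>, B0>, B0>      // 1 + 1 = 0 again, carry again
//     = UInt<UInt<UInt<UTerm,B1>, B0>, B0>   // UTerm + 1 = 1, carry absorbed
//     = U4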

Now, with that out of the way, let's write some tests to see our type-level functions at play. In addition, we'll add two type aliases to make using the type functions easier.

	
use std::ops::{Add};
use std::marker;

// way to collectively refer to bits 0 and 1
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}

// way to collectively refer to 0 (UTerm) and the general unsigned integer (UInt)
pub trait Unsigned:Default {}

#[derive(PartialEq, Debug, Default)]
pub struct UTerm;
impl Unsigned for UTerm {}

// the marker field is there to get the compiler to stop complaining about the
// non-usage of the declared generic type parameters
#[derive(PartialEq, Debug, Default)]
pub struct UInt<U:Unsigned,B:Bit>{
 _marker: marker::PhantomData<(U,B)>
}
impl <U: Unsigned,B: Bit> Unsigned for UInt<U,B>{}

// T (T stands for the UTerm terminator): zero
type U0 = UTerm;
// T1: binary 1
type U1 = UInt<UTerm,B1>;
// T10: binary 10 = 2
type U2 = UInt<UInt<UTerm,B1>,B0>;
// T11: binary 11 = 3
type U3 = UInt<UInt<UTerm,B1>,B1>;
// T100: binary 100 = 4
type U4 = UInt<UInt<UInt<UTerm,B1>,B0>,B0>;
// T101: binary 101 = 5
type U5 = UInt<UInt<UInt<UTerm,B1>,B0>,B1>;

impl Add<B0> for UTerm {
  type Output = UTerm;
  fn add(self, _:B0) -> Self::Output {
    UTerm
  }
}

impl Add<B1> for UTerm {
  type Output = UInt<UTerm,B1>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

impl <U:Unsigned, B:Bit> Add<B0> for UInt<U,B> {
   type Output = Self;
   fn add(self, _:B0) -> Self::Output {
     self
   }
}

impl <U:Unsigned> Add<B1> for UInt<U,B0> {
  type Output = UInt<U,B1>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

// All the extra constraints on the types are there to
// reassure the compiler that the Output type matches the type
// we're telling it it should output
impl <U:Unsigned> Add<B1> for UInt<U,B1>
  where U: Add<B1>,
        <U as Add<B1>>::Output: Unsigned
{
  // note the carry: the last bit flips to 0 and the 1 is added into U
  type Output = UInt<<U as Add<B1>>::Output, B0>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

type Add0<X> = <X as Add<B0>>::Output;
type Add1<X> = <X as Add<B1>>::Output;

#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn addb_test() {
      let a: Option<U0> = None;
      let b: Option<Add0<U0>> = None;
      assert_eq!(a,b);

      let c: Option<U1> = None;
      let d: Option<Add1<U0>> = None;
      assert_eq!(c,d);

      let e: Option<U5> = None;
      let f: Option<Add1<U4>> = None;
      assert_eq!(e,f);
   }
}

	
	

Hopefully the tests make sense.

Now, moving on to the last bit of this write-up.

Addition of Unsigned Integers at the type level

Let's see what needs to be implemented to get this going

  1. Representation for adding UTerm to UTerm
  2. Representation for adding UInt<U,B> to UTerm
  3. Representation for adding UTerm to UInt<U,B>
  4. Representation for adding UInt<U1,B0> to UInt<U2,B0> (here U1 and U2 represent two different unsigned integers) - A
  5. Representation for adding UInt<U1,B0> to UInt<U2,B1> (here U1 and U2 represent two different unsigned integers) - B
  6. Representation for adding UInt<U1,B1> to UInt<U2,B0> (here U1 and U2 represent two different unsigned integers) - C
  7. Representation for adding UInt<U1,B1> to UInt<U2,B1>
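Before the code, here's a paper trace (my own) of the only case that produces a carry, item 7: 3 + 3 = 6, i.e. binary 11 + 11 = 110, written with the BAdd and Add1 aliases that appear in the code below:

// BAdd<U3,U3>: both operands end in B1, so the low bits give
// 1 + 1 = 0 with a carry of 1, and the carry is folded in via Add1:
//   BAdd<UInt<U1,B1>, UInt<U1,B1>>
//     = UInt<Add1<BAdd<U1,U1>>, B0>
//     = UInt<Add1<U2>, B0>    // the prefixes: 1 + 1 = 2
//     = UInt<U3, B0>          // 2 + 1 = 3 absorbs the carry
//     = 0b110 = 6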

Let's get into it

	
use std::ops::{Add};
use std::marker;

// way to collectively refer to bits 0 and 1
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}

// way to collectively refer to 0 (UTerm) and the general unsigned integer (UInt)
pub trait Unsigned:Default {}

#[derive(PartialEq, Debug, Default)]
pub struct UTerm;
impl Unsigned for UTerm {}

// the marker field is there to get the compiler to stop complaining about the
// non-usage of the declared generic type parameters
#[derive(PartialEq, Debug, Default)]
pub struct UInt<U:Unsigned,B:Bit>{
 _marker: marker::PhantomData<(U,B)>
}
impl <U: Unsigned,B: Bit> Unsigned for UInt<U,B>{}

// T (T stands for the UTerm terminator): zero
type U0 = UTerm;
// T1: binary 1
type U1 = UInt<UTerm,B1>;
// T10: binary 10 = 2
type U2 = UInt<UInt<UTerm,B1>,B0>;
// T11: binary 11 = 3
type U3 = UInt<UInt<UTerm,B1>,B1>;
// T100: binary 100 = 4
type U4 = UInt<UInt<UInt<UTerm,B1>,B0>,B0>;
// T101: binary 101 = 5
type U5 = UInt<UInt<UInt<UTerm,B1>,B0>,B1>;

impl Add<B0> for UTerm {
  type Output = UTerm;
  fn add(self, _:B0) -> Self::Output {
    UTerm
  }
}

impl Add<B1> for UTerm {
  type Output = UInt<UTerm,B1>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

impl <U:Unsigned, B:Bit> Add<B0> for UInt<U,B> {
   type Output = Self;
   fn add(self, _:B0) -> Self::Output {
     self
   }
}

impl <U:Unsigned> Add<B1> for UInt<U,B0> {
  type Output = UInt<U,B1>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

// All the extra constraints on the types are there to
// reassure the compiler that the Output type matches the type
// we're telling it it should output
impl <U:Unsigned> Add<B1> for UInt<U,B1>
  where U: Add<B1>,
        <U as Add<B1>>::Output: Unsigned
{
  // note the carry: the last bit flips to 0 and the 1 is added into U
  type Output = UInt<<U as Add<B1>>::Output, B0>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

type Add0<X> = <X as Add<B0>>::Output;
type Add1<X> = <X as Add<B1>>::Output;

//============== New Code starts here  ==================================

// Rhs defaults to Self, so we could technically skip the angle brackets and their content, i.e. UTerm
impl Add<UTerm> for UTerm {
    type Output = UTerm;
    fn add(self, _:UTerm) -> Self::Output {
      UTerm
    }
}

impl <U:Unsigned, B:Bit> Add<UInt<U,B>> for UTerm {
   type Output = UInt<U,B>;
   fn add(self, x:Self::Output) -> Self::Output {
     x
   }
}


impl <U:Unsigned, B:Bit> Add<UTerm> for UInt<U,B> {
   type Output = UInt<U,B>;
   fn add(self, _:UTerm) -> Self::Output {
     self
   }
}

// type aliasing to make things easier for us from here
// (note the argument order: BAdd<X,Y> computes Y + X)
type BAdd<X,Y> = <Y as Add<X>>::Output;


// cases (A) and (B): the number being added ends in B0
impl <U1:Unsigned, U2:Unsigned, B:Bit> Add<UInt<U1,B0>> for UInt<U2,B> 
  where U2:Add<U1>,
        BAdd<U1,U2>: Unsigned
{
  type Output = UInt<BAdd<U1,U2>,B>;
  fn add(self, _: UInt<U1,B0>) -> Self::Output {
   // don't bother about this detail 
   UInt::default() 
  }
} 


// case (C) tackled here
impl <U1:Unsigned, U2:Unsigned> Add<UInt<U1,B1>> for UInt<U2,B0> 
  where U2:Add<U1>,
        BAdd<U1,U2>: Unsigned
{
  type Output = UInt<BAdd<U1,U2>,B1>;
  fn add(self, _: UInt<U1,B1>) -> Self::Output {
   // don't bother about this detail 
   UInt::default() 
  }
}


//WARNING: we've made a mistake in the type constraints here, can you spot the mistake?
impl <U1:Unsigned, U2:Unsigned> Add<UInt<U1,B1>> for UInt<U2,B1> 
  where U2:Add<U1>,
        BAdd<U1,U2>: Unsigned
{
  type Output = UInt<Add1<BAdd<U1,U2>>,B0>;
  fn add(self, _: UInt<U1,B1>) -> Self::Output {
   // don't bother about this detail 
   UInt::default() 
  }
}



#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn addb_test() {
      let a: Option<U0> = None;
      let b: Option<Add0<U0>> = None;
      assert_eq!(a,b);

      let c: Option<U1> = None;
      let d: Option<Add1<U0>> = None;
      assert_eq!(c,d);

      let e: Option<U5> = None;
      let f: Option<Add1<U4>> = None;
      assert_eq!(e,f);
   }
}

	
	

If you try running the above code, the last Add implementation will throw errors because we did not get the type constraints right. Let's understand why.

We have to guarantee that Add1<BAdd<U1,U2>> in UInt<Add1<BAdd<U1,U2>>,B0> is an Unsigned type

BUT our constraints fall short: our current type constraints, i.e. U2: Add<U1> and BAdd<U1,U2>: Unsigned, stop at saying that BAdd's output implements the Unsigned trait, and say nothing about the output of Add1<BAdd<U1,U2>>.

We need to modify our last constraint and add in a new one, like so

	
impl <U1:Unsigned, U2:Unsigned> Add<UInt<U1,B1>> for UInt<U2,B1> 
  where U2:Add<U1>,
	BAdd<U1,U2>: Add<B1>,
	Add1<BAdd<U1,U2>>:Unsigned
{
  type Output = UInt<Add1<BAdd<U1,U2>>,B0>;
  fn add(self, _: UInt<U1,B1>) -> Self::Output {
   // don't bother about this detail 
   UInt::default() 
  }
}
	
	

And now we add tests to finish off

	
use std::ops::{Add};
use std::marker;

// way to collectively refer to bits 0 and 1
pub trait Bit:Default {}

// bit 0
#[derive(PartialEq,Debug, Default)]
pub struct B0;
impl Bit for B0 {}


// bit 1
#[derive(PartialEq,Debug, Default)]
pub struct B1;
impl Bit for B1 {}

// way to collectively refer to 0 (UTerm) and the general unsigned integer (UInt)
pub trait Unsigned:Default {}

#[derive(PartialEq, Debug, Default)]
pub struct UTerm;
impl Unsigned for UTerm {}

// the marker field is there to get the compiler to stop complaining about the
// non-usage of the declared generic type parameters
#[derive(PartialEq, Debug, Default)]
pub struct UInt<U:Unsigned,B:Bit>{
 _marker: marker::PhantomData<(U,B)>
}
impl <U: Unsigned,B: Bit> Unsigned for UInt<U,B>{}

// T (T stands for the UTerm terminator): zero
type U0 = UTerm;
// T1: binary 1
type U1 = UInt<UTerm,B1>;
// T10: binary 10 = 2
type U2 = UInt<UInt<UTerm,B1>,B0>;
// T11: binary 11 = 3
type U3 = UInt<UInt<UTerm,B1>,B1>;
// T100: binary 100 = 4
type U4 = UInt<UInt<UInt<UTerm,B1>,B0>,B0>;
// T101: binary 101 = 5
type U5 = UInt<UInt<UInt<UTerm,B1>,B0>,B1>;

impl Add<B0> for UTerm {
  type Output = UTerm;
  fn add(self, _:B0) -> Self::Output {
    UTerm
  }
}

impl Add<B1> for UTerm {
  type Output = UInt<UTerm,B1>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

impl <U:Unsigned, B:Bit> Add<B0> for UInt<U,B> {
   type Output = Self;
   fn add(self, _:B0) -> Self::Output {
     self
   }
}

impl <U:Unsigned> Add<B1> for UInt<U,B0> {
  type Output = UInt<U,B1>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

// All the extra constraints on the types are there to
// reassure the compiler that the Output type matches the type
// we're telling it it should output
impl <U:Unsigned> Add<B1> for UInt<U,B1>
  where U: Add<B1>,
        <U as Add<B1>>::Output: Unsigned
{
  // note the carry: the last bit flips to 0 and the 1 is added into U
  type Output = UInt<<U as Add<B1>>::Output, B0>;
  fn add(self, _:B1) -> Self::Output {
    // don't bother with this detail
    UInt::default()
  }
}

type Add0<X> = <X as Add<B0>>::Output;
type Add1<X> = <X as Add<B1>>::Output;

//============== New Code starts here  ==================================

// Rhs defaults to Self, so we could technically skip the angle brackets and their content, i.e. UTerm
impl Add<UTerm> for UTerm {
    type Output = UTerm;
    fn add(self, _:UTerm) -> Self::Output {
      UTerm
    }
}

impl <U:Unsigned, B:Bit> Add<UInt<U,B>> for UTerm {
   type Output = UInt<U,B>;
   fn add(self, x:Self::Output) -> Self::Output {
     x
   }
}


impl <U:Unsigned, B:Bit> Add<UTerm> for UInt<U,B> {
   type Output = UInt<U,B>;
   fn add(self, _:UTerm) -> Self::Output {
     self
   }
}

// type aliasing to make things easier for us from here
// (note the argument order: BAdd<X,Y> computes Y + X)
type BAdd<X,Y> = <Y as Add<X>>::Output;


// cases (A) and (B): the number being added ends in B0
impl <U1:Unsigned, U2:Unsigned, B:Bit> Add<UInt<U1,B0>> for UInt<U2,B> 
  where U2:Add<U1>,
        BAdd<U1,U2>: Unsigned
{
  type Output = UInt<BAdd<U1,U2>,B>;
  fn add(self, _: UInt<U1,B0>) -> Self::Output {
   // don't bother about this detail 
   UInt::default() 
  }
} 


// case (C) tackled here
impl <U1:Unsigned, U2:Unsigned> Add<UInt<U1,B1>> for UInt<U2,B0> 
  where U2:Add<U1>,
        BAdd<U1,U2>: Unsigned
{
  type Output = UInt<BAdd<U1,U2>,B1>;
  fn add(self, _: UInt<U1,B1>) -> Self::Output {
   // don't bother about this detail 
   UInt::default() 
  }
}


impl <U1:Unsigned, U2:Unsigned> Add<UInt<U1,B1>> for UInt<U2,B1> 
  where U2:Add<U1>,
	BAdd<U1,U2>: Add<B1>,
	Add1<BAdd<U1,U2>>:Unsigned
{
  type Output = UInt<Add1<BAdd<U1,U2>>,B0>;
  fn add(self, _: UInt<U1,B1>) -> Self::Output {
   // don't bother about this detail 
   UInt::default() 
  }
}


#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn add_test() {
      let a: Option<U2> = None;
      let b: Option<BAdd<U2,U0>> = None;
      let c: Option<BAdd<U0,U2>> = None;
      assert_eq!(a,b);
      assert_eq!(a,c);

      let d: Option<U4> = None;
      let e: Option<BAdd<U2,U2>> = None;
      assert_eq!(d,e);

      let g: Option<U5> = None;
      let h: Option<BAdd<U3,U2>> = None;
      let i: Option<BAdd<U2,U3>> = None;
      assert_eq!(g,h);
      assert_eq!(g,i);
   }
}

	
	

I hope that all of this made some sense and that you, the reader, got something out of it. I do hope to write a part 2 and a part 3 for this series, so I hope you'll read those as well. Thank you.