1. Rust Fundamentals Beginner
What makes Rust different from C/C++?
Rust guarantees memory safety and thread safety at compile time without a garbage collector. The borrow checker enforces ownership rules, preventing dangling pointers, data races, and use-after-free bugs. C/C++ rely on programmer discipline (and tools like Valgrind/AddressSanitizer) to catch these at runtime.
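A minimal sketch of that guarantee in action (the `first_elem` helper is ours, for illustration): the borrow checker rejects moving or freeing data while a reference into it is still live, the exact bug class C/C++ only catch at runtime, if at all.

```rust
// Returns an owned copy instead of a dangling reference; the borrow
// checker forbids returning a reference to a local.
fn first_elem(v: &[i32]) -> Option<i32> {
    v.first().copied()
}

fn main() {
    let v = vec![1, 2, 3];
    let first = &v[0];           // immutable borrow into v's heap buffer
    // drop(v);                  // ERROR: cannot move `v` while `first` is live
    println!("first = {first}"); // borrow ends at last use (NLL)
    assert_eq!(first_elem(&v), Some(1));
}
```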
Explain the difference between String and &str.
String is a heap-allocated, owned, growable UTF-8 string. &str is a borrowed, immutable reference to a string slice — it can point to data on the heap (a String), in the binary (string literals), or on the stack. Think of String as Vec<u8> with UTF-8 guarantees and &str as &[u8] with UTF-8 guarantees.
fn greet(name: &str) { // accepts &String (via deref coercion) and &str
println!("Hello, {name}");
}
fn main() {
let owned: String = String::from("Alice"); // heap-allocated
let borrowed: &str = "Bob"; // string literal (static)
greet(&owned); // deref coercion: &String → &str
greet(borrowed); // already &str
}
What is the difference between Vec<T>, &[T], and [T; N]?
[T; N] is a fixed-size array whose length is part of the type (typically stack-allocated). Vec<T> is a growable, heap-allocated dynamic array. &[T] is a slice — a borrowed view into contiguous memory (an array, a Vec, or another slice). Functions should generally accept &[T] for maximum flexibility.
What are the scalar types in Rust?
Integers (i8 through i128, u8 through u128, isize/usize), floating-point (f32, f64), boolean (bool), and character (char — 4 bytes, representing a Unicode scalar value).
What is shadowing and how does it differ from mutability?
Shadowing creates a new variable with the same name, potentially with a different type.
let mut allows changing the value of an existing variable but not its type. Shadowing is useful for transforming a value through stages.
// Shadowing — creates a new binding (can change type)
let x = "42";
let x: i32 = x.parse().unwrap(); // x is now i32
// Mutability — same binding, same type
let mut y = 5;
y = 10; // OK: same type
// y = "hello"; // ERROR: expected i32, found &str
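A quick sketch of the three sequence types discussed above (the `sum` helper is ours): accepting &[T] lets one function serve arrays, Vecs, and sub-slices alike.

```rust
// Accepting &[i32] lets one function serve arrays, Vecs, and sub-slices.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let arr: [i32; 3] = [1, 2, 3];         // fixed size, lives on the stack
    let mut vec: Vec<i32> = vec![1, 2, 3]; // growable, heap-allocated
    vec.push(4);

    assert_eq!(sum(&arr), 6);       // &[i32; 3] coerces to &[i32]
    assert_eq!(sum(&vec), 10);      // &Vec<i32> coerces to &[i32]
    assert_eq!(sum(&vec[1..3]), 5); // borrowed view of the middle: [2, 3]
}
```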
2. Ownership & Borrowing Core
Key Topic
Ownership is THE defining feature of Rust. This topic comes up constantly in practice and assessments. Be ready to explain the rules, draw memory diagrams, and predict compiler errors.
The Three Ownership Rules
- Each value in Rust has exactly one owner at a time
- When the owner goes out of scope, the value is dropped
- Assigning, passing, or returning a non-Copy value moves ownership; the previous owner becomes invalid
(The "one mutable reference OR any number of immutable references, never both" rule is a borrowing rule, covered below.)
Ownership Transfer (Move Semantics):
let s1 = String::from("hello");
let s2 = s1; ← s1 is MOVED to s2
s1 is now INVALID
Stack (ptr/len/cap):           Heap (bytes):
┌──────────────────┐           ┌───────────┐
│ s1 (invalid)   ╳ │           │ "hello"   │
│ s2: ptr ─────────┼──────────▶│           │
│     len: 5       │           └───────────┘
│     cap: 5       │
└──────────────────┘
After the move, only s2 owns the heap data.
Accessing s1 → compile error: "borrow of moved value"
What is the difference between Copy and Clone?
Copy is an implicit, bitwise copy that happens automatically on assignment (only for types stored entirely on the stack — integers, bools, floats, tuples of Copy types). Clone is an explicit, potentially expensive deep copy invoked with .clone(). If a type implements Copy, it also implements Clone, but not vice versa. String implements Clone but NOT Copy because it owns heap data.
// Copy types — assignment copies, no move
let a = 42;
let b = a; // a is still valid (i32 is Copy)
println!("{a}"); // OK!
// Non-Copy types — assignment moves
let s1 = String::from("hello");
let s2 = s1; // s1 is MOVED
// println!("{s1}"); // ERROR: borrow of moved value
// Use .clone() for explicit deep copy
let s3 = String::from("world");
let s4 = s3.clone(); // deep copy — both valid
println!("{s3} {s4}"); // OK!
Borrowing Rules
Explain the borrowing rules. Why can't you have a mutable and immutable reference simultaneously?
Rust prevents data races at compile time. A data race occurs when: (1) two or more pointers access the same data, (2) at least one writes, and (3) there's no synchronization. By forbidding simultaneous mutable and immutable references, Rust guarantees no reader sees partially written data.
let mut v = vec![1, 2, 3];
let first = &v[0]; // immutable borrow
// v.push(4); // ERROR: mutable borrow while immutable exists
println!("{first}"); // immutable borrow ends here (NLL)
v.push(4); // OK now — no active immutable borrows
// Non-Lexical Lifetimes (NLL): borrows end at last USE, not scope end
What is "Non-Lexical Lifetimes" (NLL)?
Before Rust 2018, borrows lasted until the end of the enclosing scope. NLL (enabled by default since 2018 edition) makes borrows end at their last point of use, not the end of the block. This makes the borrow checker more permissive and eliminates many false-positive errors.
Common Pitfall: Returning References
// ❌ ERROR: dangling reference
fn dangling() -> &str {
let s = String::from("hello");
&s // s is dropped here → reference would dangle!
}
// ✅ Return owned value instead
fn not_dangling() -> String {
String::from("hello") // ownership transferred to caller
}
// ✅ Or return a &'static str (lives forever)
fn static_str() -> &'static str {
"hello" // string literal has 'static lifetime
}
3. Lifetimes Advanced
What are lifetimes and why does Rust need them?
Lifetimes are annotations (e.g., 'a) that tell the compiler how long references are valid. They prevent dangling references by ensuring a reference never outlives the data it points to. Most of the time, lifetimes are inferred by the compiler (elision rules), but sometimes you must annotate them explicitly — especially in function signatures with multiple references.
// The compiler can't infer which input's lifetime to assign to the output
// We annotate: return value lives as long as BOTH inputs
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
if x.len() > y.len() { x } else { y }
}
fn main() {
let s1 = String::from("long string");
let result;
{
let s2 = String::from("xyz");
result = longest(&s1, &s2);
println!("{result}"); // OK: both s1 and s2 alive
}
// println!("{result}"); // ERROR: s2 dropped, result might dangle
}
Lifetime Elision Rules
The compiler applies three rules to infer lifetimes automatically:
- Each input reference gets its own lifetime: fn foo(x: &str, y: &str) becomes fn foo<'a, 'b>(x: &'a str, y: &'b str)
- If there is exactly one input lifetime, it is assigned to all outputs: fn foo(x: &str) -> &str becomes fn foo<'a>(x: &'a str) -> &'a str
- If one input is &self or &mut self, its lifetime is assigned to all outputs
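A sketch of rule 2 (the `first_word` names are ours): the elided and the explicit signatures are identical to the compiler.

```rust
// Rule 2 in action: exactly one input lifetime, so the output gets it.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// What the compiler desugars it to:
fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word_explicit("hello world"), "hello");
}
```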
Lifetimes in Structs
// Struct that holds a reference — MUST have lifetime annotation
struct Excerpt<'a> {
part: &'a str, // this reference must outlive the struct
}
impl<'a> Excerpt<'a> {
// No lifetime needed — the return type contains no references
fn level(&self) -> i32 { 3 }
// Return type borrows from self, not announcement
fn announce(&self, announcement: &str) -> &str {
println!("Attention: {announcement}");
self.part // returns reference with lifetime 'a
}
}
// 'static lifetime — lives for entire program duration
let s: &'static str = "I live forever";
What is 'static and when should you use it?
'static means the reference is valid for the entire program duration. String literals are 'static. You also see T: 'static as a trait bound, which means T either contains no references or all its references are 'static. It does NOT mean the value must live forever — it means the value is able to live forever if needed. Use it sparingly; prefer shorter lifetimes.
4. Type System & Traits Core
What is the difference between impl Trait and dyn Trait?
impl Trait uses static dispatch (monomorphization) — the compiler generates specialized code for each concrete type. Zero runtime cost but increases binary size. dyn Trait uses dynamic dispatch via a vtable — one version of the code handles all types. Small runtime overhead but smaller binaries, and it enables heterogeneous collections.
// Static dispatch — monomorphized at compile time
fn print_area(shape: impl Shape) {
println!("Area: {}", shape.area());
}
// Dynamic dispatch — vtable lookup at runtime
fn print_area_dyn(shape: &dyn Shape) {
println!("Area: {}", shape.area());
}
// Heterogeneous collection (only possible with dyn)
let shapes: Vec<Box<dyn Shape>> = vec![
Box::new(Circle { radius: 5.0 }),
Box::new(Rectangle { w: 3.0, h: 4.0 }),
];
Key Traits Every Rustacean Must Know
| Trait | Purpose | Example Method |
|---|---|---|
| Clone | Explicit deep copy | .clone() |
| Copy | Implicit bitwise copy (stack-only) | Automatic on assignment |
| Drop | Custom cleanup when value goes out of scope | fn drop(&mut self) |
| Display | User-facing string formatting | fmt::Display |
| Debug | Developer-facing debug formatting | {:?} |
| From / Into | Type conversion | From::from(val) |
| Deref / DerefMut | Smart pointer dereferencing | *val |
| Iterator | Lazy sequential access | .next() |
| Send | Safe to transfer between threads | Auto-implemented |
| Sync | Safe to share references between threads | Auto-implemented |
| Sized | Known size at compile time (default bound) | Implicit |
| Fn / FnMut / FnOnce | Callable types (closures) | closure(args) |
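A short sketch of two of these traits in practice (the Celsius/Fahrenheit types are ours, for illustration): implementing Display enables `{}` formatting, and implementing From gives you the reciprocal Into for free.

```rust
use std::fmt;

struct Celsius(f64);
struct Fahrenheit(f64);

// Display: user-facing formatting, enables `{}` in println!/format!
impl fmt::Display for Celsius {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:.1}°C", self.0)
    }
}

// Implementing From<Celsius> also provides Into<Fahrenheit> for Celsius.
impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Self {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    assert_eq!(format!("{}", Celsius(100.0)), "100.0°C");
    let f: Fahrenheit = Celsius(100.0).into(); // Into via the From impl
    assert_eq!(f.0, 212.0);
}
```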
Trait Objects: Object Safety Rules
A trait is object-safe (can be used as dyn Trait) only if:
- All methods have a receiver (&self, &mut self, or self)
- No method returns Self
- No method has generic type parameters
- The trait does not require Self: Sized
// ✅ Object-safe
trait Draw {
fn draw(&self);
}
// ❌ NOT object-safe (returns Self)
trait Clonable {
fn clone_self(&self) -> Self; // can't know Self's size at runtime
}
// ❌ NOT object-safe (generic method)
trait Serializer {
fn serialize<T: serde::Serialize>(&self, val: &T);
}
5. Enums & Pattern Matching Core
What makes Rust enums more powerful than C/Java enums?
Rust enums are algebraic data types (sum types). Each variant can hold different types and amounts of data. Combined with exhaustive pattern matching, they replace null pointers, exception handling, and union types safely.
// Enum with data in each variant
enum Message {
Quit, // no data
Move { x: i32, y: i32 }, // named fields (struct-like)
Write(String), // single value (tuple-like)
Color(u8, u8, u8), // multiple values
}
fn handle(msg: Message) {
match msg {
Message::Quit => println!("Quit"),
Message::Move { x, y } => println!("Move to ({x}, {y})"),
Message::Write(text) => println!("Text: {text}"),
Message::Color(r, g, b) => println!("#{r:02x}{g:02x}{b:02x}"),
} // ← exhaustive: every variant MUST be handled
}
// Option<T> — Rust's null replacement
fn find_user(id: u64) -> Option<User> {
if id == 1 { Some(user) } else { None } // illustrative: `user` stands for a constructed User
}
}
// if let — concise pattern matching for single variant
if let Some(user) = find_user(1) {
println!("Found: {}", user.name);
}
// let-else (Rust 1.65+) — bind or diverge
let Some(user) = find_user(id) else {
return Err("User not found".into());
};
6. Error Handling Core
Explain Rust's error handling philosophy. What is Result<T, E> vs panic!?
Rust has two error categories: recoverable (Result<T, E>) and unrecoverable (panic!). Use Result for expected failures (file not found, network error). Use panic! for bugs (index out of bounds, invariant violations). The ? operator propagates errors ergonomically.
use std::fs;
use std::io;
// The ? operator: propagate errors concisely
fn read_config() -> Result<String, io::Error> {
let content = fs::read_to_string("config.toml")?; // returns Err early if fails
Ok(content)
}
// Custom error type with thiserror
use thiserror::Error;
#[derive(Error, Debug)]
enum AppError {
#[error("IO error: {0}")]
Io(#[from] io::Error), // auto From<io::Error>
#[error("Parse error: {0}")]
Parse(#[from] serde_json::Error),
#[error("Not found: {0}")]
NotFound(String),
}
// With anyhow for application code (not libraries)
use anyhow::{Context, Result};
fn load_config() -> Result<Config> {
let text = fs::read_to_string("config.toml")
.context("Failed to read config file")?;
let config: Config = toml::from_str(&text)
.context("Failed to parse config")?;
Ok(config)
}
When to Use Which
| Crate | Use For | Key Feature |
|---|---|---|
| thiserror | Library code | Derive macro for custom error types with From impls |
| anyhow | Application code | Ergonomic Result<T> with context chaining |
| eyre | Application code | Like anyhow but customizable reporters (e.g., color-eyre) |
| miette | CLI tools | Beautiful diagnostic reports with source spans |
7. Smart Pointers Advanced
| Type | Ownership | Thread-Safe | Mutability | Use Case |
|---|---|---|---|---|
| Box<T> | Single owner | Send + Sync (if T is) | Inherited | Heap allocation, recursive types, trait objects |
| Rc<T> | Multiple owners | ❌ Single-threaded only | Immutable | Shared ownership in a single thread |
| Arc<T> | Multiple owners | ✅ Atomic refcount | Immutable | Shared ownership across threads |
| Cell<T> | Single owner | ❌ | Interior mutability (Copy) | Mutate through a shared reference (Copy types) |
| RefCell<T> | Single owner | ❌ | Interior mutability (runtime) | Borrow-check at runtime instead of compile time |
| Mutex<T> | Single owner (wrap in Arc) | ✅ | Interior mutability (locking) | Thread-safe mutable access |
| RwLock<T> | Single owner (wrap in Arc) | ✅ | Multiple readers OR one writer | Read-heavy concurrent access |
| Cow<'a, T> | Clone-on-write | Depends on T | Clones only when mutation needed | Avoid unnecessary cloning |
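Cow<'a, T> from the table deserves a quick sketch of its own (the `sanitize` function is ours, for illustration): it borrows on the common path and allocates only when a modification is actually needed.

```rust
use std::borrow::Cow;

// Borrows when the input is already clean; allocates only when a
// modification is actually required.
fn sanitize(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_")) // clone-on-write path
    } else {
        Cow::Borrowed(input) // zero-allocation path
    }
}

fn main() {
    assert!(matches!(sanitize("clean"), Cow::Borrowed(_)));
    assert_eq!(sanitize("has spaces"), "has_spaces");
}
```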
use std::rc::Rc;
use std::cell::RefCell;
// Rc + RefCell = multiple owners with interior mutability (single-thread)
let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
let clone1 = Rc::clone(&shared);
let clone2 = Rc::clone(&shared);
clone1.borrow_mut().push(4); // runtime borrow check
println!("{:?}", shared.borrow()); // [1, 2, 3, 4]
// Arc + Mutex = multiple owners with interior mutability (multi-thread)
use std::sync::{Arc, Mutex};
let counter = Arc::new(Mutex::new(0));
let handles: Vec<_> = (0..10).map(|_| {
let counter = Arc::clone(&counter);
std::thread::spawn(move || {
let mut num = counter.lock().unwrap();
*num += 1;
})
}).collect();
for h in handles { h.join().unwrap(); }
println!("Result: {}", *counter.lock().unwrap()); // 10
8. Concurrency Advanced
Explain the Send and Sync traits.
Send means a type can be safely transferred to another thread (ownership moves). Sync means a type can be safely shared between threads via references (T is Sync exactly when &T is Send). Most types are both. Notable exceptions: Rc<T> is neither (its reference count is not atomic). RefCell<T> is Send (when T is) but not Sync. Raw pointers are neither.
use std::sync::mpsc;
use std::thread;
// Message passing with channels
let (tx, rx) = mpsc::channel();
let producer = thread::spawn(move || {
for i in 0..5 {
tx.send(format!("msg {i}")).unwrap();
}
});
// Receive all messages
for msg in rx {
println!("Received: {msg}");
}
// Rayon — data parallelism made easy
use rayon::prelude::*;
let sum: i64 = (0..1_000_000)
.into_par_iter() // parallel iterator
.map(|x| x * x)
.sum();
// Scoped threads (Rust 1.63+) — borrow from parent stack
// Note: two closures in the same scope may not take & and &mut borrows
// of the same data simultaneously — the usual borrow rules still apply.
let mut data = vec![1, 2, 3];
std::thread::scope(|s| {
s.spawn(|| {
println!("Reader 1: {:?}", &data); // immutable borrow — no Arc needed!
});
s.spawn(|| {
println!("Reader 2: {:?}", &data); // many readers may share the borrow
});
});
data.push(4); // mutate after the scope — all borrows have ended
9. Async / Await Advanced
How does async/await work in Rust? What is a Future?
An async fn returns a Future — a state machine that represents a value that will be available later. Futures are lazy: they do nothing until polled by an executor (like Tokio). .await suspends the current task, yielding control to the executor until the future resolves. Unlike JavaScript promises, Rust futures are zero-cost abstractions compiled to state machines.
Async State Machine:
async fn fetch_data() -> String {
let resp = client.get(url).await; // → State 1: waiting for HTTP
let body = resp.text().await; // → State 2: waiting for body
body // → Complete: return value
}
Compiles to approximately:
enum FetchDataFuture {
State0 { client, url }, // Initial
State1 { resp_future }, // Awaiting HTTP response
State2 { body_future }, // Awaiting body text
Complete(String), // Done
}
impl Future for FetchDataFuture {
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<String> {
match self.state {
State0 => { /* start request, transition to State1 */ }
State1 => { /* poll resp, if ready → State2 */ }
State2 => { /* poll body, if ready → Complete */ }
}
}
}
use tokio;
// Basic async function
async fn fetch_url(url: &str) -> Result<String, reqwest::Error> {
let resp = reqwest::get(url).await?;
let body = resp.text().await?;
Ok(body)
}
// Concurrent execution with join!
async fn fetch_all() -> (String, String) {
let (a, b) = tokio::join!(
fetch_url("https://api.example.com/a"),
fetch_url("https://api.example.com/b"),
);
(a.unwrap(), b.unwrap())
}
// select! — race multiple futures
tokio::select! {
val = future_a => println!("A finished: {val:?}"),
val = future_b => println!("B finished: {val:?}"),
_ = tokio::time::sleep(Duration::from_secs(5)) => {
println!("Timeout!");
}
}
What is Pin and why does async Rust need it?
Pin<P> is a wrapper that prevents the pointed-to value from being moved in memory. Async futures are often self-referential — they contain pointers to data within their own struct. If the struct moves, those pointers become invalid. Pin guarantees the data stays at its memory address, making it safe for the runtime to poll the future.
Async Runtimes Comparison
| Runtime | Best For | Threads | Key Features |
|---|---|---|---|
| tokio | Network services, general-purpose | Multi-threaded | Mature, largest ecosystem, I/O + timers + channels |
| async-std | std-like async API | Multi-threaded | Mirrors the std API, simpler for beginners |
| smol | Minimal, embedded | Configurable | Tiny footprint, composable |
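A minimal sketch of the Pin mechanics described above (the `pin_string` helper is ours; it is not a full self-referential future):

```rust
use std::pin::Pin;

// Heap-allocate and pin: the String's address is now stable for the
// lifetime of the Pin.
fn pin_string(s: &str) -> Pin<Box<String>> {
    Box::pin(String::from(s))
}

fn main() {
    let mut pinned = pin_string("pinned");
    // String is Unpin, so mutation through the Pin still works. For a
    // !Unpin type (e.g., a self-referential future), safe code could not
    // move the value out of the Pin — the guarantee poll() relies on.
    pinned.push_str(" data");
    assert_eq!(pinned.as_str(), "pinned data");
}
```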
10. Closures & Iterators Core
What are Fn, FnMut, and FnOnce?
These traits define how a closure uses its captured environment:
- FnOnce — consumes captured values; callable at most once
- FnMut — mutably borrows captured values; callable multiple times
- Fn — immutably borrows captured values; the most restrictive for the closure, the most flexible for the caller
Every closure implements FnOnce. If it doesn't move captured values out, it also implements FnMut. If it doesn't mutate them, it also implements Fn.
let name = String::from("Alice");
// Fn — borrows name immutably
let greet = || println!("Hello, {name}");
greet(); greet(); // can call multiple times
// FnMut — borrows mutably
let mut count = 0;
let mut increment = || { count += 1; };
increment(); increment();
// FnOnce — takes ownership (move)
let name = String::from("Bob");
let consume = move || {
println!("Consumed: {name}");
drop(name); // name is moved into closure
};
consume();
// consume(); // ERROR: FnOnce — already called
// Iterator chaining — zero-cost abstraction
let result: Vec<i32> = (1..=10)
.filter(|x| x % 2 == 0) // keep evens
.map(|x| x * x) // square them
.collect(); // [4, 16, 36, 64, 100]
11. Unsafe Rust Advanced
What can you do in unsafe that you can't in safe Rust?
Five capabilities: (1) dereference raw pointers, (2) call unsafe functions/methods, (3) access or modify mutable static variables, (4) implement unsafe traits, (5) access fields of unions. unsafe does NOT turn off the borrow checker — it only unlocks these five operations. The programmer takes responsibility for upholding the compiler's invariants.
// Raw pointers — created safely, dereferenced unsafely
let mut num = 42;
let r1 = &num as *const i32; // immutable raw pointer
let r2 = &mut num as *mut i32; // mutable raw pointer
unsafe {
println!("r1: {}", *r1); // dereference raw pointer
*r2 = 100; // write through raw pointer
}
// FFI — calling C functions
extern "C" {
fn abs(input: i32) -> i32;
}
let result = unsafe { abs(-3) }; // 3
// Safe abstraction over unsafe code
fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
let len = values.len();
let ptr = values.as_mut_ptr();
assert!(mid <= len);
unsafe {
(
std::slice::from_raw_parts_mut(ptr, mid),
std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
)
}
}
12. Macros Intermediate
What are the two kinds of macros in Rust?
Declarative macros (macro_rules!) use pattern matching on token trees to generate code. Procedural macros (derive macros, attribute macros, function-like macros) operate on token streams and run Rust code at compile time to generate code. Proc macros are more powerful but must live in a separate proc-macro crate.
// Declarative macro
macro_rules! hashmap {
($($key:expr => $val:expr),* $(,)?) => {{
let mut map = std::collections::HashMap::new();
$( map.insert($key, $val); )*
map
}};
}
let scores = hashmap! {
"Alice" => 95,
"Bob" => 87,
};
// Derive macro (procedural) — most common proc macro type
#[derive(Debug, Clone, Serialize, Deserialize)]
struct User {
name: String,
age: u32,
}
13. Memory Layout Advanced
How does Rust lay out structs and enums in memory?
By default (repr(Rust)), the compiler may reorder fields to minimize padding. #[repr(C)] uses C-compatible, declaration-order layout. Enums use a discriminant (tag) plus the size of the largest variant. The compiler performs niche optimization — e.g., Option<&T> is the same size as &T because the null pointer is used to represent None.
use std::mem;
// Niche optimization
assert_eq!(mem::size_of::<&i32>(), 8); // 8 bytes (64-bit)
assert_eq!(mem::size_of::<Option<&i32>>(), 8); // still 8! None = null ptr
assert_eq!(mem::size_of::<Option<bool>>(), 1); // 1 byte (bool has niches)
assert_eq!(mem::size_of::<Option<Option<bool>>>(), 1); // still 1!
// Zero-Sized Types (ZSTs)
assert_eq!(mem::size_of::<()>(), 0);
assert_eq!(mem::size_of::<Vec<()>>(), 24); // Vec metadata, no actual data
// Struct layout
#[repr(C)]
struct CStyle { a: u8, b: u32, c: u8 } // 12 bytes (declaration order forces padding)
struct Reordered { a: u8, b: u32, c: u8 } // 8 bytes (compiler may reorder fields)
14. Design Patterns Intermediate
Newtype Pattern
// Type safety without runtime cost
struct Meters(f64);
struct Kilometers(f64);
fn travel(distance: Kilometers) { /* ... */ }
// travel(Meters(100.0)); // ERROR: expected Kilometers, got Meters
Builder Pattern
struct Server { host: String, port: u16, max_conn: usize }
struct ServerBuilder { host: String, port: u16, max_conn: usize }
impl ServerBuilder {
fn new() -> Self {
Self { host: "0.0.0.0".into(), port: 8080, max_conn: 100 }
}
fn host(mut self, host: &str) -> Self { self.host = host.into(); self }
fn port(mut self, port: u16) -> Self { self.port = port; self }
fn build(self) -> Server {
Server { host: self.host, port: self.port, max_conn: self.max_conn }
}
}
let server = ServerBuilder::new().host("localhost").port(3000).build();
Typestate Pattern
// Compile-time state machine — invalid states are unrepresentable
struct Locked;
struct Unlocked;
struct Door<State> { _state: std::marker::PhantomData<State> }
impl Door<Locked> {
fn unlock(self) -> Door<Unlocked> {
println!("Unlocking...");
Door { _state: std::marker::PhantomData }
}
}
impl Door<Unlocked> {
fn open(&self) { println!("Opening door"); }
fn lock(self) -> Door<Locked> {
Door { _state: std::marker::PhantomData }
}
}
let door = Door::<Locked> { _state: std::marker::PhantomData };
// door.open(); // ERROR: no method `open` for Door<Locked>
let door = door.unlock();
door.open(); // OK!
15. Testing Core
// Unit tests — in the same file
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_add() {
assert_eq!(add(2, 3), 5);
}
#[test]
#[should_panic(expected = "divide by zero")]
fn test_divide_by_zero() {
divide(10, 0);
}
#[test]
fn test_result() -> Result<(), Box<dyn std::error::Error>> {
let val: i32 = "42".parse()?;
assert_eq!(val, 42);
Ok(())
}
}
// Integration tests — in tests/ directory
// tests/integration_test.rs
use my_crate::public_function;
#[test]
fn integration_test() {
assert!(public_function(5) > 0);
}
// Property-based testing with proptest
use proptest::prelude::*;
proptest! {
#[test]
fn doesnt_crash(s in "\\PC*") {
let _ = parse(&s); // `parse` stands for the function under test — must never panic
}
}
16. Performance Intermediate
What are zero-cost abstractions in Rust?
Rust's high-level abstractions (iterators, closures, traits, generics) compile down to the same machine code you'd write by hand. Iterator chains like .map().filter().collect() produce tight loop code with no virtual dispatch and no intermediate allocations (only the final collect allocates). This is achieved through monomorphization (generics) and inlining.
Common Performance Tips
| Technique | When | Example |
|---|---|---|
| Use &str over String | Read-only access | fn process(s: &str) |
| Use Cow<str> | Maybe-clone scenarios | Parse input, clone only on modification |
| Vec::with_capacity(n) | Known size | Avoid reallocations |
| collect() with type hint | Iterator chains | .collect::<Vec<_>>() |
| #[inline] | Small hot functions | Cross-crate inlining hint |
| Avoid .clone() | Hot paths | Borrow instead |
| Use HashMap::entry | Insert-or-update | Single lookup instead of two |
| Profile before optimizing | Always | cargo flamegraph, perf |
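Two of the tips above can be sketched together (the `word_counts` helper is ours): HashMap::entry does a single lookup per key, and with_capacity pre-sizes the map.

```rust
use std::collections::HashMap;

// One hash lookup per word via entry(); the naive contains_key +
// insert pattern hashes each key twice.
fn word_counts<'a>(words: &[&'a str]) -> HashMap<&'a str, u32> {
    let mut counts = HashMap::with_capacity(words.len()); // pre-sized: no rehash
    for &w in words {
        *counts.entry(w).or_insert(0) += 1; // find-or-insert in one lookup
    }
    counts
}

fn main() {
    let counts = word_counts(&["apple", "banana", "apple"]);
    assert_eq!(counts["apple"], 2);
    assert_eq!(counts["banana"], 1);
}
```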
17. Ecosystem & Tooling General
| Tool | Purpose | Command |
|---|---|---|
| cargo | Build system & package manager | cargo build / run / test |
| rustfmt | Code formatter | cargo fmt |
| clippy | Lint collection (500+ lints) | cargo clippy |
| rust-analyzer | IDE/LSP support | VS Code extension |
| miri | Undefined behavior detector | cargo +nightly miri run |
| cargo-expand | View macro expansion | cargo expand |
| cargo-flamegraph | CPU profiling | cargo flamegraph |
| cargo-audit | Security vulnerability scan | cargo audit |
| cargo-deny | License & dependency policy | cargo deny check |
Must-Know Crates
| Category | Crate | Purpose |
|---|---|---|
| Serialization | serde + serde_json | De/serialization framework |
| Async Runtime | tokio | Async runtime + utilities |
| HTTP Client | reqwest | Ergonomic HTTP client |
| Web Framework | axum / actix-web | Web server frameworks |
| CLI | clap | Command-line argument parsing |
| Error Handling | thiserror / anyhow | Error types and context |
| Logging | tracing | Structured logging + spans |
| Database | sqlx / diesel | Async SQL / ORM |
| Testing | proptest / rstest | Property-based / parameterized tests |
| Parallelism | rayon | Data parallelism for iterators |
18. Coding Challenges Practice
Challenge 1: Implement a Thread-Safe Cache
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::hash::Hash;
#[derive(Clone)]
struct Cache<K, V> {
store: Arc<RwLock<HashMap<K, V>>>,
}
impl<K: Eq + Hash + Clone, V: Clone> Cache<K, V> {
fn new() -> Self {
Self { store: Arc::new(RwLock::new(HashMap::new())) }
}
fn get(&self, key: &K) -> Option<V> {
self.store.read().unwrap().get(key).cloned()
}
fn set(&self, key: K, value: V) {
self.store.write().unwrap().insert(key, value);
}
fn get_or_insert_with(&self, key: K, f: impl FnOnce() -> V) -> V {
// Try read first (no write lock needed)
if let Some(val) = self.store.read().unwrap().get(&key) {
return val.clone();
}
// Cache miss — acquire write lock
let mut store = self.store.write().unwrap();
// Double-check (another thread may have inserted)
store.entry(key).or_insert_with(f).clone()
}
}
Challenge 2: Flatten a Nested Iterator
fn flatten<I>(iter: I) -> Flatten<I>
where
I: Iterator,
I::Item: IntoIterator,
{
Flatten { outer: iter, inner: None }
}
struct Flatten<I: Iterator>
where I::Item: IntoIterator
{
outer: I,
inner: Option<<I::Item as IntoIterator>::IntoIter>,
}
impl<I> Iterator for Flatten<I>
where
I: Iterator,
I::Item: IntoIterator,
{
type Item = <I::Item as IntoIterator>::Item;
fn next(&mut self) -> Option<Self::Item> {
loop {
if let Some(ref mut inner) = self.inner {
if let Some(item) = inner.next() {
return Some(item);
}
}
let next_inner = self.outer.next()?.into_iter();
self.inner = Some(next_inner);
}
}
}
// Usage: flatten(vec![vec![1,2], vec![3,4]].into_iter()) → [1,2,3,4]
Challenge 3: Implement a Simple Linked List
type Link<T> = Option<Box<Node<T>>>;
struct Node<T> { val: T, next: Link<T> }
struct List<T> { head: Link<T> }
impl<T> List<T> {
fn new() -> Self { Self { head: None } }
fn push(&mut self, val: T) {
let new_node = Box::new(Node {
val,
next: self.head.take(), // take ownership of old head
});
self.head = Some(new_node);
}
fn pop(&mut self) -> Option<T> {
self.head.take().map(|node| {
self.head = node.next;
node.val
})
}
fn peek(&self) -> Option<&T> {
self.head.as_ref().map(|node| &node.val)
}
}
// Proper Drop to avoid stack overflow on deep lists
impl<T> Drop for List<T> {
fn drop(&mut self) {
let mut cur = self.head.take();
while let Some(mut node) = cur {
cur = node.next.take(); // iterative drop, not recursive
}
}
}
19. System Design in Rust Senior
When would you choose Rust for a project over Go, Python, or C++?
Choose Rust when you need: (1) Memory safety without garbage collection pauses (embedded, real-time). (2) High performance + safety (network services, databases, game engines). (3) Fearless concurrency (data-parallel processing). (4) Long-running services where GC pauses are unacceptable. (5) WebAssembly targets. (6) Security-critical code (crypto, OS kernel modules). Avoid Rust for rapid prototyping, scripting, or when team Rust expertise is low.
Rust in Production: Common Architectures
| Use Case | Stack | Companies |
|---|---|---|
| Web API | Axum + SQLx + Tokio | Cloudflare, Discord |
| CLI Tools | Clap + Serde + Crossterm | Starship, ripgrep, bat, fd |
| Systems / Infra | Custom + Tokio + Serde | AWS (Firecracker), Meta, Dropbox |
| Embedded | no_std + embassy + HAL | Oxide Computer, Framework Laptop |
| Blockchain | Substrate / Solana runtime | Polkadot, Solana |
| Databases | Custom B-tree + io_uring | TiKV, SurrealDB, Qdrant |
| WebAssembly | wasm-bindgen + wasm-pack | Figma, 1Password |
20. Cheat Sheet
Quick Reference: Ownership & Borrowing
| Scenario | Code | What Happens |
|---|---|---|
| Move | let b = a; | a is invalid (non-Copy types) |
| Copy | let b = a; | a is still valid (Copy types) |
| Immutable borrow | let r = &a; | Multiple allowed simultaneously |
| Mutable borrow | let r = &mut a; | Exclusive — no other borrows allowed |
| Clone | let b = a.clone(); | Deep copy — both valid |
Quick Reference: Common Conversions
| From | To | Method |
|---|---|---|
| &str | String | s.to_string() or String::from(s) or s.to_owned() |
| String | &str | &s or s.as_str() |
| &str | i32 | s.parse::<i32>()? |
| i32 | String | n.to_string() |
| Vec<T> | &[T] | &v or v.as_slice() |
| &[T] | Vec<T> | s.to_vec() |
| Option<T> | Result<T, E> | opt.ok_or(err)? |
| Result<T, E> | Option<T> | res.ok() |
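The conversions in the table above, exercised in one runnable sketch:

```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // &str <-> String
    let owned: String = "hello".to_owned();
    let borrowed: &str = owned.as_str();
    assert_eq!(borrowed, "hello");

    // &str -> i32 -> String
    let n: i32 = "42".parse()?;
    assert_eq!(n.to_string(), "42");

    // Vec<T> <-> &[T]
    let v = vec![1, 2, 3];
    let slice: &[i32] = v.as_slice();
    assert_eq!(slice.to_vec(), v);

    // Option <-> Result
    let opt: Option<i32> = Some(7);
    let res: Result<i32, &str> = opt.ok_or("missing");
    assert_eq!(res, Ok(7));
    assert_eq!(res.ok(), Some(7));
    Ok(())
}
```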
Quick Reference: Lifetime Annotations
| Syntax | Meaning |
|---|---|
| &'a T | Reference valid for at least lifetime 'a |
| &'static T | Reference valid for entire program |
| T: 'a | T contains no references shorter than 'a |
| T: 'static | T owns all its data (or refs are 'static) |
| for<'a> | Higher-ranked trait bound (HRTB) — works for ANY lifetime |
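A small sketch of the last row (the `apply_to_both` function is ours): the HRTB says the closure must accept a &str of ANY lifetime, so we can call it with references to two different locals.

```rust
// The HRTB requires f to work for EVERY lifetime 'a, not one
// particular caller-chosen lifetime.
fn apply_to_both<F>(f: F) -> (usize, usize)
where
    F: for<'a> Fn(&'a str) -> usize,
{
    let s1 = String::from("short");
    let s2 = String::from("a longer string");
    (f(&s1), f(&s2)) // two calls, two distinct (local) lifetimes
}

fn main() {
    assert_eq!(apply_to_both(|s| s.len()), (5, 15));
}
```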