Traditional monolithic systems are quickly giving way to more modern approaches. Distributed architectures are taking over to tackle the challenges and complexities of modern businesses. These applications typically involve services that can cater to the changing business requirements in this fast-paced technological world and work seamlessly together. However, that's not the whole picture. These distributed systems often continue to rely on a single technology stack or programming language, which can become insufficient to handle the complexities of business operations.

Enter polyglot architecture.

In this article, I'll examine how you can build a distributed analytics platform using Rust and C# that requires ultra-fast ingestion and transformation of telemetry data together with reporting capabilities. The Rust microservices will handle real-time data ingestion, transformation, and ultra-fast data processing pipelines, while the C# microservices running on the ASP.NET Core platform will enforce security, expose integration endpoints (REST/gRPC), serve management dashboards, and execute business logic.

This article discusses the synergy between Rust and .NET to build cloud-native ecosystems with polyglot microservices. It also discusses the best strategies for interoperability between Rust and C# via gRPC, REST, or message brokers (e.g., RabbitMQ, Kafka).

If you want to work with the code examples discussed in this article, you need the following installed on your system:

  • Visual Studio 2022
  • Visual Studio Code
  • .NET 10.0
  • ASP.NET 10.0 Runtime
  • RabbitMQ

If you don't already have Visual Studio 2022 installed on your computer, you can download it from here: https://visualstudio.microsoft.com/downloads/.

At the end of this journey, you'll be able to build scalable, high-performance, secure, distributed applications using .NET Aspire and Rust.

In today's digital world, technological advancements occur at an increasingly fast pace. At the same time, the complexity of business requirements is increasing at an unprecedented rate. All of this can push things to the point where one technology stack or programming language can no longer hold up. You need to use several technologies and leverage each for its strengths. When you face such challenges, you may need to adopt a polyglot architecture to keep pace and handle the business changes that emerge.

The Rise of Distributed Architectures

In traditional applications, you'll often see the user interface, business logic, and data access components residing in a single location. Although building and deploying such applications was simple, over time, those systems turned into real bottlenecks. When you want to scale the application, you must grow the entire system at once. Deployments also became challenging, as they often required downtime to implement the changes.

Additionally, technology lock-in made it challenging to switch frameworks or languages when needed. These constraints led to the advent of distributed architectures: breaking the system into smaller, more manageable pieces. And there are benefits aplenty. One of the key advantages of such systems is that the components can scale independently of each other. Resilience and fault isolation ensure that a crash in one component won't bring the entire application down. These systems can also speed up delivery because you can build the components separately, often in parallel.

That said, there are certain downsides to going distributed as well. You have to handle the challenges in network latency, data consistency, service discovery, observability, and versioning.

What Is a Polyglot Architecture?

A polyglot architecture is a conglomeration of different technologies used to build the components of a system, providing the flexibility to select the best technology for building a specific component of the system. Essentially, it's an architecture that enables you to build a component using your desired technology stack. Figure 1 illustrates a typical polyglot architecture.

Figure 1: A typical polyglot architecture

Benefits and Downsides of Polyglot Architectures

Here are the key benefits of polyglot architectures:

  • Flexibility: A polyglot architecture is a suitable fit for both large companies and small startups, as it effectively addresses the diverse and varied needs of businesses. A polyglot architecture enables them to step into almost any space, including big data, IoT, AI, cloud computing, and data analysis, among others.
  • Fosters creativity: A polyglot architecture gives you some guidelines for how to structure your microservices, and it also lets engineers get their hands dirty and make their own decisions about which tools to use. This unleashes their creativity and boosts their sense of ownership, and that can lead to some great solutions for the business.
  • Faster time to market: With a polyglot architecture, teams can use the tech stack that makes them most comfortable. This leads to more rapid development and a quicker time to market.
  • Broadens the talent pool: A polyglot architecture enables you to select from a broader pool of potential candidates for your team, including individuals with skills in .NET, C++, Scala, Go, Python, Node.js, or Java.

Although polyglot architecture has several benefits, it also comes with some downsides, such as:

  • Complexity: A polyglot architecture requires a reasonable degree of teamwork and coordination, and it can get complicated when you've got multiple languages and tech stacks to juggle.
  • Standardization of contracts: Standardizing contracts between microservices is an issue, especially when different teams use different programming languages.

Challenges in Building Polyglot Distributed Systems

Despite the allure of polyglot architecture, distributed applications that use polyglot architectures have specific challenges:

  • Operational complexity and governance: Managing teams across diverse technology stacks requires strong governance and operational effort. Most importantly, the usage of different runtimes requires diverse monitoring tools. Although implementing observability in such systems is critical, it's equally challenging.
  • Data consistency: Maintaining data consistency across diverse platforms that use heterogeneous storage mechanisms is challenging. As a result, this may lead to data inconsistency, and your data may become stale if you're not able to manage data spread across distributed data stores.
  • Security and access control: Implementing security policies and access control across multiple platforms can be challenging. Because there are numerous runtimes in this architecture, polyglot distributed systems can have several attack surfaces. As a result, these may increase security risks and vulnerabilities.

An Introduction to Polyglot Microservices

Polyglot microservices refer to an architectural style in which different microservices within the same system are built with different technologies, programming languages, and frameworks. This approach originates from polyglot programming, where different languages are used to leverage their individual strengths.

A typical polyglot microservice is a self-contained unit that is:

  • Written in its own programming language (e.g., C#, Go, Python, Node.js, Rust, etc.)
  • Deployed on its own runtime (e.g., .NET, JVM, Node.js, Python runtime)
  • Backed by a database type that is specific to it (e.g., SQL Server, Oracle, NoSQL, etc.)
  • Managed and scaled independently, together with its dependencies

Advantages of Polyglot Microservices

The key benefits of polyglot microservices include the following:

  • Technology choice: One of the key benefits of polyglot microservices is its support for making the optimal technology choice. By using polyglot microservices, you can select the most suitable language or tool for each service, thereby maximizing efficiency and throughput.
  • Developer productivity: By using polyglot microservices, developers can be more productive, as they use languages, frameworks, and tools with which they are already familiar. As a result, this boosts their creativity and also raises productivity levels. Using polyglot architectures enables the adoption of newer languages, tools, and technologies.
  • Separation of concerns: Polyglot microservices promote a clear separation of concerns where each microservice models a specific business domain. In a typical polyglot architecture, each service can be independently scaled, deployed, and maintained, thereby improving fault isolation and operational flexibility.
  • Flexibility: When using polyglot microservices, teams can use the languages and technologies they're most proficient in, rather than being locked into one technology. Because different runtimes and databases can coexist, the system is less dependent on any single vendor ecosystem (e.g., Microsoft, Oracle, Google, AWS, etc.).

Introducing .NET Aspire

.NET Aspire is a modern, opinionated stack for building cloud-native, distributed, and observable .NET applications. It provides unified tooling, templates, and libraries to simplify orchestration, configuration, and monitoring across multiple services and resources in a solution, making local and cloud deployment much easier and more consistent.

.NET Aspire delivers tools, templates, and packages for building production-ready distributed apps, centered around a code-first application model called AppHost, which defines services, resources, and their relationships. Aspire enables seamless orchestration, local development, and deployment to Kubernetes, cloud platforms, or on-premises infrastructure.
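To make the code-first AppHost model concrete, here's a minimal sketch of what an AppHost Program.cs might look like. The project names (Projects.OMS_ApiService, Projects.OMS_Web) and the RabbitMQ resource are illustrative assumptions; the generated names depend on your solution, and the AddRabbitMQ call requires the Aspire RabbitMQ hosting integration package.

var builder = DistributedApplication.CreateBuilder(args);

// Declare a RabbitMQ resource (assumes the Aspire RabbitMQ hosting integration).
var rabbitmq = builder.AddRabbitMQ("messaging");

// Wire up the API service and pass it a reference to the broker;
// Aspire injects the connection details through configuration.
var apiService = builder.AddProject<Projects.OMS_ApiService>("apiservice")
    .WithReference(rabbitmq);

// The Blazor front-end discovers the API service by its logical name.
builder.AddProject<Projects.OMS_Web>("webfrontend")
    .WithReference(apiService);

builder.Build().Run();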

Key Features

The key features of .NET Aspire include:

  • Cross-platform support: .NET Aspire enables you to build applications that can run on Windows, Linux, and macOS environments.
  • Unified orchestration: AppHost automates the wiring, configuration and startup order of all your app services and resources.
  • Standardized integrations: NuGet packages and templates for popular services (e.g. PostgreSQL, Redis) for consistent configuration and connectivity.
  • Seamless integration and built-in support for observability: Integrated logging, tracing, metrics and health checks via OpenTelemetry and ServiceDefaults for system-wide monitoring.
  • Developer-friendly: Provides pre-built templates and libraries and is adaptable for any infrastructure, extensible with your custom workflows.
  • Code-first orchestration: AppHost allows you to define and orchestrate APIs, front-ends, databases, caches, and message brokers with simple builder patterns in code.
  • Configuration management: Automatically manages ports, connection strings, environment variables and secrets for all services.
  • Integrated monitoring dashboard: Aspire provides a visual dashboard for viewing logs, traces and health checks, simplifying system-wide observability.
  • Health checks and service discovery: Built-in support for health checks and discovery logic for resilience and self-healing in distributed systems.

Understanding the .NET Aspire Project Structure

When you create a new .NET Aspire application in Visual Studio, the following projects are automatically created in it:

  • ApiService: A component that represents the back-end API provider, which handles tasks such as data access, business logic, and communication between the web application in the presentation layer and the database.
  • AppHost: Within a .NET Aspire application, this component is responsible for coordinating project execution, managing dependencies and configuration, and facilitating the integration between the different components of the application.
  • ServiceDefaults: This component centralizes shared configuration (such as telemetry, health checks, service discovery, and resilience defaults) that the other projects in the solution reuse, making the application easier to maintain.
  • Web: This component takes advantage of Blazor to provide a responsive user interface, handle user interactions, and display data retrieved from back-end services.

Best Practices for Building Cloud Applications with .NET Aspire

Here are a few best practices you should keep in mind when building your cloud applications with .NET Aspire:

  • Design for scalability and resilience: Systems designed for scalability and resilience can handle increased load and recover from failures through techniques like load balancing, distributed architectures, and redundancy. Breaking an application down into numerous smaller services—called microservices—helps improve scalability and reduces the likelihood that a problem with one part of the application will bring the whole application down.
  • CI/CD automation: Integrate your Aspire projects with CI/CD pipelines using Azure DevOps or GitHub Actions to streamline the build, test, and deployment processes and automate the entire path to production. Use tools like ARM templates or Terraform to define your infrastructure, so you can maintain consistency and make version control easier.
  • Security and compliance: You can leverage Aspire to enforce security, such as ensuring that communication between services is secure, managing secrets safely, and complying with cloud security best practices, including encryption, authentication, and role-based access controls.
  • Regular health checks: Conduct periodic health checks for each microservice in your application to ensure reliability, scalability, and performance, facilitate faster troubleshooting, and enable automated responses in the event of service failures. By monitoring the health of each service, systems can automatically detect the availability and health of a service and take appropriate action if any issues arise.
  • Built-in support for observability: Observability is the ability to analyze logs, metrics, and traces for insight into a system's internal state. Because .NET Aspire provides support for observability out of the box, you can collect logs, traces, and metrics to monitor and debug your applications seamlessly.
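As a rough sketch of how a service opts into these observability defaults, here's what the relevant part of a service's Program.cs looks like when the ServiceDefaults project is referenced. The AddServiceDefaults and MapDefaultEndpoints extension methods are generated by the Aspire starter template; if your template differs, treat the names as an assumption.

var builder = WebApplication.CreateBuilder(args);

// Wires up OpenTelemetry logging, tracing, and metrics, plus health checks,
// service discovery, and resilient HttpClient defaults from ServiceDefaults.
builder.AddServiceDefaults();

var app = builder.Build();

// Maps the /health and /alive endpoints configured by AddServiceDefaults
// (exposed in development by default).
app.MapDefaultEndpoints();

app.Run();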

What Is Rust? Why Do You Need It?

Rust is a popular systems programming language that's designed to provide safety, performance, and concurrency while at the same time eliminating common bugs and memory errors found in C/C++ and adding modern language features and cross-platform support.

Rust can eliminate memory corruption and data races, common in unmanaged languages like C/C++, by enforcing safety at compile time. In addition, you can leverage Rust to build scalable, reliable, and efficient systems without garbage-collector overhead. Rust is a great choice in security-sensitive, low-latency, and high-throughput environments (operating systems, cloud services, networking).

Typical Use Cases of Rust

Here are some of the use cases of Rust:

  • Building operating systems: Rust can be used to build device drivers, kernels, and even entire operating systems. Incidentally, the Redox operating system was built in the Rust programming language.
  • Internet of things: Rust is a great choice for IoT systems that typically have limited resources, where memory footprint and concurrency matter.
  • Data processing: Rust is a great choice in data processing applications where fast and concurrent event streaming is required.
  • Network programming: Rust is also a good choice for building networking applications because of its support for memory safety, concurrency, and low-level access capabilities.

Key Features

The key features of Rust include the following:

  • Type-safe and memory-safe: Rust prevents data races and null-pointer errors at compile time, without relying on a garbage collector.
  • High performance: Rust compiles to native code, so it's well suited to performance-critical tasks.
  • Concurrency: Rust has built-in language features for thread safety on multi-core hardware.
  • Zero-cost abstractions: High-level constructs don't incur runtime overhead.
  • Modern tooling: Rust includes a powerful package manager named Cargo and robust documentation capabilities for better productivity.

Why Combine Rust and C# in Distributed Systems?

Combining Rust and C# can help you build distributed systems by leveraging their capabilities, such as the following:

  • Use Rust for better performance: Rust can handle low-level, CPU-bound or latency-sensitive operations (e.g., custom network protocols, storage engines). Use Rust when you need to build high-performance secure systems.
  • Use C# for rapid application development: C# is good for building business logic, cloud APIs, and GUIs, and it integrates well with .NET cloud stacks.

Because you can always call Rust modules from C# via interop, you can adopt a hybrid approach that combines Rust's speed and safety with .NET's rich ecosystem and rapid development.
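As a minimal sketch of what such interop can look like, the snippet below calls an exported Rust function from C# via P/Invoke. The library name (rust_math) and the function (add_numbers) are hypothetical; on the Rust side, the function would be compiled into a cdylib and exported with #[no_mangle] and extern "C".

using System.Runtime.InteropServices;

internal static class RustInterop
{
    // Matches a hypothetical Rust export:
    //   #[no_mangle]
    //   pub extern "C" fn add_numbers(a: i32, b: i32) -> i32 { a + b }
    [DllImport("rust_math", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int add_numbers(int a, int b);
}

// Usage from C#:
// int sum = RustInterop.add_numbers(2, 3); // returns 5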

Anatomy of a Rust Application

A typical Rust application has a project structure built around Cargo, which, incidentally, is Rust's package manager and build system. The main files and folders in a typical Rust project include the following:

  • Cargo.toml: The manifest file located at the root of the project, comprising the project's metadata (name, version, etc.), dependencies (external crates), and the necessary build settings.
  • Cargo.lock: An automatically generated file that locks the dependency versions used during a build to ensure reproducibility.
  • src: Represents the source directory in your Rust application's project structure and contains the following files:
  • main.rs: The main entry point for binary (executable) projects, containing the fn main() function.
  • lib.rs: The root of a library crate, present if the project is a library or both a library and a binary.
  • Additional .rs files or folders: You can create additional .rs files or folders within the src folder, depending on the modules or submodules you need to organize your code.

In the Rust programming language, you can use modules to organize the source code in a project. You can manage the code in several ways, such as the following:

  • Start with a single file (e.g., main.rs) and have all the logic in it.
  • Modularize the source code of your Rust application by splitting it into modules.
  • Split these modules into separate files; for example, a module named helloworld in main.rs corresponds to a helloworld.rs file in the src folder.

Getting Started with Rust

In this section, I'll examine how you can get started with Rust.

Step 1: Download the Rust Installer

Download the Rust installer from the official Rust installation page: https://rust-lang.org/tools/install/

Download the correct installer based on the processor architecture and the operating system you're using. In this article, I'll assume that you're using a computer system with a 64-bit processor and Windows 10 or 11 installed.

Step 2: Execute the Rust Installer

Once you've downloaded the Rust installer, double-click on the installer executable file to start the installation process. When the Rust installer starts, you'll be presented with the following options:

  1. Proceed with default installation (recommended)
  2. Custom installation
  3. Cancel installation

Step 3: Continue Installation

Press 1 to proceed with the default installation as shown in Figure 2. When you press 1, the Visual Studio Installer window will be launched.

Figure 2: Installing Rust in Windows

The Visual Studio Installer will download and install the additional components you'll need for running Rust in your computer system.

Step 4: Complete the Installation

Once the additional components have been installed, type 1 in the console window again to continue the remaining part of the installation process.

Once Rust has been successfully installed on your computer, you can verify the installation by executing the following command at the command window:

rustc --version

Remember that to work with Rust on Windows, you'll need to install two additional components: the MSVC v143 – VS 2022 C++ build tools and the Windows 11 SDK (10.0.22000.0).

Install the Rust-Analyzer Extension

Now that you've installed Rust in your computer, install the rust-analyzer extension. This extension provides support for working with Rust in Visual Studio Code. To install this extension in Visual Studio Code, press Ctrl + Shift + X keys together to launch the Extensions Marketplace in Visual Studio Code. Now, select and install the release version of this extension, as shown in Figure 3.

Figure 3: Installing the rust-analyzer extension in Visual Studio Code.

Writing Your First Rust Program in Visual Studio Code

To write your Rust program, you can take advantage of Cargo to scaffold a new project. To do this, type cargo new and the name of the project you'd like to create, as shown in the code snippet below:

cargo new rust_example

You can launch Visual Studio Code in the project folder by running the following commands at the terminal window:

cd rust_example
code .

The main.rs file is the main entry point of the program and contains the main() function. You can build your Rust program using cargo build. To do this, press the Ctrl + Shift + ` keys together to launch the terminal window and then type cargo build in there, as shown in the code snippet below.

cargo build

Once the Rust program has compiled successfully, you can run the application by executing the following command at the terminal window.

cargo run

Create a Rust Application to Display System Metrics

In this section, you'll implement a Rust program that displays CPU, memory, network, and disk usage on your computer every five seconds. Create a new Rust project and open it in Visual Studio Code by executing the following commands at the terminal window.

cargo new system_monitor
cd system_monitor
code .

In this application, you'll create cpu.rs, memory.rs, network.rs, and disk.rs files, each containing the logic to retrieve CPU, memory, network, and disk usage information, respectively. In the main.rs file, first import the required crates and traits, as shown in the code snippet below:

use sysinfo::{System, SystemExt};
use chrono::{Local, DateTime};
use tokio::time::{sleep, Duration};

Create a new file named cpu.rs and write the following code in there to get CPU usage percentage for each core.

use sysinfo::{CpuExt, System, SystemExt};

pub struct CpuStats {
    pub avg_cpu_usage: f32,
}

pub fn get_cpu_stats(sys: &System) -> CpuStats {
    let cpu_usages: Vec<f32> = sys.cpus()
        .iter()
        .map(|cpu| cpu.cpu_usage())
        .collect();
    let avg_cpu_usage = if cpu_usages.is_empty() {
        0.0
    } else {
        cpu_usages.iter().sum::<f32>() / cpu_usages.len() as f32
    };
    CpuStats {
        avg_cpu_usage,
    }
}

Create another file named memory.rs and write the following code in there to access memory usage data in your computer:

use sysinfo::SystemExt;
use sysinfo::System;

pub struct MemoryStats {
    pub total_memory_mb: u64,
    pub used_memory_mb: u64,
}

pub fn get_memory_stats(sys: &System) -> MemoryStats {
    MemoryStats {
        total_memory_mb: sys.total_memory() / 1024,
        used_memory_mb: sys.used_memory() / 1024,
    }
}

Next, create another file named network.rs and write the following code in there to access network usage information in your computer:

use sysinfo::{System, NetworksExt, NetworkExt, SystemExt};

pub struct NetworkStats {
    pub received_mb: f64,
    pub transmitted_mb: f64,
}

pub fn get_network_stats(sys: &System) -> NetworkStats {
    let networks = sys.networks();
    let received_mb = networks
        .iter()
        .map(|(_, data)| data.total_received() as f64 / 1_000_000.0)
        .sum();
    let transmitted_mb = networks
        .iter()
        .map(|(_, data)| data.total_transmitted() as f64 / 1_000_000.0)
        .sum();
    NetworkStats {
        received_mb,
        transmitted_mb,
    }
}

Likewise, the following code snippet in the disk.rs file can be used to collect disk usage information.

use sysinfo::SystemExt;
use sysinfo::{System, DiskExt};

pub struct DiskStats {
    pub name: String,
    pub mount_point: String,
    pub total_gb: f64,
    pub free_gb: f64,
}

pub fn get_disk_stats(sys: &System) -> Vec<DiskStats> {
    sys.disks()
        .iter()
        .map(|disk| DiskStats {
            name: disk.name().to_string_lossy().into_owned(),
            mount_point: disk.mount_point().to_string_lossy().into_owned(),
            total_gb: disk.total_space() as f64 / 1_000_000_000.0,
            free_gb: disk.available_space() as f64 / 1_000_000_000.0,
        })
        .collect()
}

To display the CPU, memory, network, and disk usage information, write the following piece of code in the main() function in your main.rs file.

sys.refresh_cpu();
sleep(Duration::from_millis(100)).await;
sys.refresh_components();
sys.refresh_memory();
sys.refresh_all();

let timestamp = Local::now();

let cpu_stats = get_cpu_stats(&sys);
let mem_stats = get_memory_stats(&sys);
let net_stats = get_network_stats(&sys);
let disk_stats = get_disk_stats(&sys);

println!("Timestamp: {}", timestamp.format("%Y-%m-%d %H:%M:%S"));
println!("CPU Usage (average across all cores): {:.2}%", pu_stats.avg_cpu_usage);

println!();
println!("Memory Usage:");
println!("  Total: {} MB", mem_stats.total_memory_mb);
println!("  Used : {} MB", mem_stats.used_memory_mb);
println!();

The complete source code of the main.rs file is given in Listing 1.

Listing 1: The main.rs file pertaining to the System Monitor application

mod cpu;
mod memory;
mod disk;
mod network;

use chrono::Local;
use sysinfo::{System, SystemExt};
use tokio::time::{sleep, Duration};

use cpu::get_cpu_stats;
use memory::get_memory_stats;
use disk::get_disk_stats;
use network::get_network_stats;

#[tokio::main]
async fn main() {
    let mut sys = System::new_all();

    loop {
        sys.refresh_cpu();
        sleep(Duration::from_millis(100)).await;

        sys.refresh_components();
        sys.refresh_memory();
        sys.refresh_all();

        let timestamp = Local::now();

        let cpu_stats = get_cpu_stats(&sys);
        let mem_stats = get_memory_stats(&sys);
        let net_stats = get_network_stats(&sys);
        let disk_stats = get_disk_stats(&sys);

        println!("Timestamp: {}", timestamp.format("%Y-%m-%d %H:%M:%S"));
        println!(
            "CPU Usage (average across all cores): {:.2}%",
            cpu_stats.avg_cpu_usage
        );

        println!();

        println!("Memory Usage:");
        println!("  Total: {} MB", mem_stats.total_memory_mb);
        println!("  Used : {} MB", mem_stats.used_memory_mb);

        println!();

        println!("Network Usage:");
        println!("  Received: {:.2} MB", net_stats.received_mb);
        println!("  Transmitted: {:.2} MB", net_stats.transmitted_mb);

        println!();

        println!("Disk Usage:");
        for disk in disk_stats {
            println!(
                "  {} mounted on {} - Total: {:.2} GB, Free: {:.2} GB",
                disk.name,
                disk.mount_point,
                disk.total_gb,
                disk.free_gb
            );
        }

        println!("------------------- -\n");

        sleep(Duration::from_secs(5)).await;
    }
}

Now, replace the autogenerated code of the Cargo.toml file with the following content.

[package]
name = "system_monitor"
version = "1.0.0"
edition = "2024"

[dependencies]
sysinfo = "0.29"
chrono = "0.4"
tokio = { version = "1", features = ["full"] }

When you run the program, the system metrics values will be displayed at the console, as shown in Figure 4.

Figure 4: Displaying CPU, Memory, Network, and Disk usage metadata

The build.rs file represents a build script used for tasks like generating code, setting environment variables, compiling external resources, or complex build-time configuration steps.

Getting Started with .NET Aspire in Visual Studio

You can create a project in Visual Studio 2022 in several ways, such as from the Visual Studio 2022 Developer Command Prompt or by launching the Visual Studio 2022 IDE. When you launch Visual Studio 2022, you'll see the Start window. You can choose “Continue without code” to launch the main screen of the Visual Studio 2022 IDE.

Now that you know the basics, let's start setting up the project. To create a new .NET Aspire project in Visual Studio 2022:

  1. Start the Visual Studio 2022 IDE.
  2. In the Create a new project window, select “Aspire Starter App”, and click Next to move on. Refer to Figure 5.
Figure 5: Creating a new project in .NET Aspire
  3. Specify the project name as OMS and the path where it should be created in the Configure your new project window.
  4. If you want the solution file and project to be created in the same directory, you can optionally check the Place solution and project in the same directory checkbox. Click Next to move on.
  5. In the next screen, specify the target framework. Ensure that the “Configure for HTTPS” checkbox is checked and the “Use Redis for caching…” checkbox is unchecked because you won't use Redis in this example.
  6. Click Create to complete the process.

This creates a new .NET Aspire application in Visual Studio. At first glance, the Solution Explorer of this application looks like Figure 6.

Figure 6: The Solution Explorer Window

By default, the AppHost is the startup project, i.e., when you run the application, the AppHost is executed first. Figure 7 shows how the .NET Aspire application looks in the web browser when you execute it.

Figure 7: The .NET Aspire application in execution

Building a Real-time Data Processing Application Using Rust and C#

In this section, you'll implement a real-time data processing application using Rust and C#. By blending Rust and C# with .NET Aspire, organizations can achieve better performance, seamless integration, and enhanced scalability when working with real-time distributed applications.

The Rust microservice will be used to generate order data and send it to a queue, while the C# microservice will read the data from the queue, convert it to C# CLR objects, and return it to the user interface. Finally, .NET Aspire will be used to connect the microservices, aggregate the data, and display it in the user interface. Figure 8 illustrates the complete flow of the application.

Figure 8: The complete flow of the application

The ASP.NET Core APIs will be used for user authentication and to retrieve, aggregate, and present relevant data to dashboards or external clients. .NET Aspire orchestrates both runtimes while automating service discovery, environment configuration, and observability.

Create the Cargo.toml File

The following piece of code in the Cargo.toml file is used to define the Cargo package metadata:

[package]
name = "orders_producer"
version = "1.0.0"
edition = "2024"

The following piece of code in the Cargo.toml file specifies the required dependencies:

[dependencies]
chrono = { version = "0.4", features = ["serde"] }
tokio = { version = "1", features = ["full"] }
uuid = { version = "1.7", features = ["v4"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
lapin = "2.3"
rand = "0.8"

The complete source code of the Cargo.toml file is given here:

[package]
name = "orders_producer"
version = "1.0.0"
edition = "2024"

[dependencies]
chrono = { version = "0.4", features = ["serde"] }
tokio = { version = "1", features = ["full"] }
uuid = { version = "1.7", features = ["v4"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
lapin = "2.3"
rand = "0.8"

Create the Order struct

In the main.rs file, create a struct named Order with the following code:

#[derive(Debug, Serialize)]
struct Order {
    order_id: String,
    product_id: u32,
    customer_id: u32,
    order_date: String,
    status: String,
}

The Order struct is used to represent an order with fields such as order id, product id, customer id, order date, and status. Note how the Order struct derives Serialize to enable JSON serialization.

Generate Random Dates

The following code snippet shows how you can create a new function named generate_random_date inside the main.rs file to generate random dates between two bounds: the start date and the end date.

fn generate_random_date(start: NaiveDate, end: NaiveDate) -> String {
    let days_between = (end - start).num_days() as u32;
    let random_days = rand::thread_rng().gen_range(0..=days_between);
    let date = start + Duration::days(random_days as i64);
    date.to_string()
}

Create a Connection to RabbitMQ

The following piece of code shows how you can set up a connection to RabbitMQ from your Rust program:

async fn configure_rabbitmq(addr: &str, queue_name: &str) -> lapin::Channel {
    let connection = Connection::connect(addr, ConnectionProperties::default())
        .await
        .expect("Error: Connection failed");
    let channel = connection.create_channel()
        .await
        .expect("Error: Failed to create channel");
    channel.queue_declare(
        queue_name,
        QueueDeclareOptions {
            durable: true,
            ..Default::default()
        },
        FieldTable::default(),
    )
    .await
    .expect("Error: Failed to declare the queue");
    channel
}

It should be noted that RabbitMQ doesn't allow declaring the same queue with conflicting parameters; if you do, it returns a PRECONDITION_FAILED error. If the queue was originally created as durable, you must also set durable = true in your Rust code when declaring it, which is what the queue_declare call above does with QueueDeclareOptions { durable: true, ..Default::default() }. This ensures that the declarations match and prevents a PRECONDITION_FAILED error.
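The same rule applies on the consuming side: the C# service must declare the queue with matching parameters. A minimal sketch, assuming the asynchronous API of the RabbitMQ.Client package (the same call appears later in Listing 4), looks like this:

using RabbitMQ.Client;

var factory = new ConnectionFactory
{
    HostName = "localhost",
    UserName = "guest",
    Password = "guest"
};

var connection = await factory.CreateConnectionAsync();
var channel = await connection.CreateChannelAsync();

// The durable flag must match the producer's declaration;
// a mismatch triggers a PRECONDITION_FAILED error.
await channel.QueueDeclareAsync(
    queue: "order_records",
    durable: true,
    exclusive: false,
    autoDelete: false,
    arguments: null);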

The following piece of code illustrates how you can create a new order with random values:

fn create_random_order(start_date: NaiveDate, 
                       end_date: NaiveDate, 
                       statuses: &[&str]) -> Order {
    Order {
        order_id: Uuid::new_v4().to_string(),
        product_id: rand::thread_rng().gen_range(1..=10),
        customer_id: rand::thread_rng().gen_range(1..=5),
        order_date: generate_random_date(start_date, end_date),
        status: statuses[rand::thread_rng().gen_range(0..statuses.len())]
          .to_string(),
    }
}

Publish the Order Records to RabbitMQ

The publish_order function given below shows how you can publish the generated order records to RabbitMQ.

async fn publish_order(channel: &lapin::Channel, 
                       queue_name: &str, 
                       order: &Order) {
    let payload = serde_json::to_vec(&order)
      .expect("Error: Failed to serialize order");

    channel
        .basic_publish(
            "",
            queue_name,
            BasicPublishOptions::default(),
            &payload,
            BasicProperties::default(),
        )
        .await
        .expect("Error: Failed to publish message")
        .await
        .expect("Error: Failed to confirm message");
}

Create the main() Function

Finally, the main function, given below, sets up a connection to RabbitMQ and then loops to generate and send 50 order messages to the queue, printing each sent order and pausing briefly to avoid flooding the broker.

#[tokio::main]
async fn main() {
    let address = "amqp://guest:guest@localhost:5672//";
    let queue_name = "order_records";
    let channel = configure_rabbitmq(address, queue_name).await;

    let statuses = vec!["Pending", "Processing", "Shipped", 
      "Delivered", "Cancelled"];

    let start_date = NaiveDate::from_ymd_opt(2025, 1, 1).unwrap();
    let end_date = Utc::now().naive_utc().date();

    let total_records = 50;
    for _ in 0..total_records {
        let order = create_random_order(start_date, end_date, &statuses);
        publish_order(&channel, queue_name, &order).await;
        println!("Sent order: {:?}", order);
        sleep(TokioDuration::from_millis(50)).await;
    }
}

The complete source code of the main.rs file, which contains the necessary code to set up a connection with RabbitMQ, is given in Listing 2.

Listing 2: The main.rs file pertaining to the OrderManagementSystem

use serde_json;
use chrono::{NaiveDate, Duration, Utc};
use rand::Rng;
use uuid::Uuid;
use serde::Serialize;
use lapin::{options::*, types::FieldTable, 
    BasicProperties, Connection, ConnectionProperties};
use tokio::time::{sleep, Duration as TokioDuration};

#[derive(Debug, Serialize)]
struct Order {
    order_id: String,
    product_id: u32,
    customer_id: u32,
    order_date: String,
    status: String,
}

fn generate_random_date(start: NaiveDate, end: NaiveDate) -> String {
    let days_between = (end - start).num_days() as u32;
    let random_days = rand::thread_rng().gen_range(0..=days_between);
    let date = start + Duration::days(random_days as i64);
    date.to_string()
}

async fn configure_rabbitmq(addr: &str, queue_name: &str) -> lapin::Channel {
    let connection = Connection::connect(addr, ConnectionProperties::default())
        .await
        .expect("Error: Connection failed");

    let channel = connection.create_channel()
        .await
        .expect("Error: Failed to create channel");

    channel.queue_declare(
        queue_name,
        QueueDeclareOptions {
            durable: true,
            ..Default::default()
        },
        FieldTable::default(),
    )
    .await
    .expect("Error: Failed to declare queue");
    channel
}

fn create_random_order(start_date: NaiveDate, end_date: NaiveDate, 
statuses: &[&str]) -> Order {
    Order {
        order_id: Uuid::new_v4().to_string(),
        product_id: rand::thread_rng().gen_range(1..=10),
        customer_id: rand::thread_rng().gen_range(1..=5),
        order_date: generate_random_date(start_date, end_date),
        status: statuses[rand::thread_rng().gen_range
          (0..statuses.len())].to_string(),
    }
}

async fn publish_order(channel: &lapin::Channel, 
    queue_name: &str, order: &Order) {
    let payload = serde_json::to_vec(&order)
        .expect("Error: Failed to serialize order");
    channel.basic_publish(
        "",
        queue_name,
        BasicPublishOptions::default(),
        &payload,
        BasicProperties::default(),
    )
    .await
    .expect("Error: Failed to publish message to the queue")
    .await
    .expect("Error: Failed to confirm message");
}

#[tokio::main]
async fn main() {
    let address = "amqp://guest:guest@localhost:5672//";
    let queue_name = "orders.queue";
    let channel = configure_rabbitmq(address, queue_name).await;
    let statuses = vec!["Pending", "Processing", "Shipped", 
      "Delivered", "Cancelled"];
  
    let start_date = NaiveDate::from_ymd_opt(2025, 1, 1).unwrap();
    let end_date = Utc::now().naive_utc().date();
    let total_records = 50;
    for _ in 0..total_records {
        let order = create_random_order(start_date, end_date, &statuses);
        publish_order(&channel, queue_name, &order).await;
        println!("Sent order: {:?}", order);
        sleep(TokioDuration::from_millis(50)).await;
    }
}

Using Protobuf Instead of Struct to Reduce Payload

Protocol Buffers (Protobuf) is a language- and platform-neutral, fast, compact serialization format from Google used for serializing structured data. protobuf-net is a .NET library that allows you to serialize and deserialize data in the Google Protocol Buffers format. Protobuf is supported by several languages, such as C++, Java, C#, Python, Ruby, Objective-C, Go, JavaScript, and more.

There are several benefits in using Protocol Buffers instead of structs in this example, such as the following:

  • Fast and efficient serialization
  • Strong typing
  • Schema versioning
  • Cross-platform compatibility
  • Language neutrality
  • Smaller payloads

Any protobuf file should have a .proto extension. A typical .proto file is structured as follows:

syntax = "proto3";

option csharp_namespace = "MyProtobufDemo.Protos";

message Customer {
    uint64 id = 1;
    string firstname = 2;
    string lastname = 3;
    bool is_active = 4;
}

The first statement in a .proto file specifies the version of Protobuf in use. This is followed by an option statement that specifies the name of the namespace. To define your data elements, use the message keyword. You can read more about Protobuf in this article: https://www.codemag.com/Article/2212071/A-Deep-Dive-into-Working-with-gRPC-in-.NET-6
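For reference, here's a rough sketch of how the Customer message above might map to a protobuf-net class on the C# side and be round-tripped through a stream. The class and sample values are illustrative; the important point is that the [ProtoMember] numbers must match the field tags in the .proto definition.

using System.IO;
using ProtoBuf;

// Serialize a Customer to the compact Protobuf wire format and read it back.
var customer = new Customer { Id = 1, Firstname = "Jane", Lastname = "Doe", IsActive = true };
using var stream = new MemoryStream();
Serializer.Serialize(stream, customer);
stream.Position = 0;
var roundTripped = Serializer.Deserialize<Customer>(stream);

[ProtoContract]
public class Customer
{
    [ProtoMember(1)] public ulong Id { get; set; }                // uint64 id = 1;
    [ProtoMember(2)] public string Firstname { get; set; } = "";  // string firstname = 2;
    [ProtoMember(3)] public string Lastname { get; set; } = "";   // string lastname = 3;
    [ProtoMember(4)] public bool IsActive { get; set; }           // bool is_active = 4;
}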

To implement Protocol Buffers in this example, follow the steps outlined below.

Step 1: Create a New Rust Project

Create a new Rust project using the following commands:

cargo new rust_metrics_producer
cd rust_metrics_producer

Step 2: Create the .proto File

Open the newly created Rust project in Visual Studio Code and create a new file named order.proto inside the src folder with the following content:

syntax = "proto3";

package order;

message Order {
    string order_id = 1;
    uint32 product_id = 2;
    uint32 customer_id = 3;
    string order_date = 4;
    string status = 5;
}

The order.proto file uses proto3 syntax and comprises the fields order_id, product_id, customer_id, order_date, and status. At compile time, the .proto file is used to generate Rust structs and the encode/decode logic.

Step 3: Create a build.rs File and Add It at the Project Root

Create a build.rs file at the root folder of the project and write the following code in there:

fn main() {
    prost_build::compile_protos(&["src/order.proto"], &["src/"])
        .expect("Error: Failed to compile protobuf definitions");
}

The build.rs file contains the build script that is executed by Cargo during the build process. It uses prost-build to compile the order.proto file and fails the build with an error if the compilation doesn't succeed.

Step 4: Update the Cargo.toml File

Update the Cargo.toml file with the following content:

[package]
name = "orders_producer"
version = "0.1.0"
edition = "2021"
build = "build.rs"

[build-dependencies]
prost-build = "0.11"

[dependencies]
chrono = { version = "0.4", features = ["serde"] }
lapin = "2.3"
prost = "0.11"
rand = "0.8"
tokio = { version = "1", features = ["full"] }
uuid = { version = "1.7", features = ["v4"] }

Step 5: Update the main.rs File

Here is how Protobuf is used in the main.rs file for high-performance data exchange. Instances of the Order struct are serialized into the Protobuf format and sent to RabbitMQ as raw bytes, enabling structured, type-safe, cross-language message sharing in distributed systems. Update the main.rs file with the source code given in Listing 3 to leverage Protocol Buffers.

Listing 3: The main.rs file to leverage Protocol Buffers

use chrono::{NaiveDate, Duration, Utc};
use lapin::{options::*, types::FieldTable, BasicProperties, 
    Connection, ConnectionProperties};
use prost::Message;
use rand::Rng;
use tokio::time::{sleep, Duration as TokioDuration};
use uuid::Uuid;

pub mod order {
  include!(concat!(env!("OUT_DIR"), "/order.rs"));
}

use order::Order;

fn generate_random_date(start: NaiveDate, end: NaiveDate) -> String {
    let days_between = (end - start).num_days() as u32;
    let random_days = rand::thread_rng().gen_range(0..=days_between);
    (start + Duration::days(random_days as i64)).to_string()
}

async fn initialize(addr: &str, queue_name: &str) -> lapin::Channel {
    let connection = Connection::connect(addr, 
      ConnectionProperties::default())
        .await
        .expect("Error: Connection failed…");

    let channel = connection.create_channel()
        .await
        .expect("Error: Failed to create channel");

    channel.queue_declare(queue_name, QueueDeclareOptions::default(), 
      FieldTable::default())
        .await
        .expect("Error: Failed to declare queue");

    channel
}

#[tokio::main]
async fn main() {
    let addr = "amqp://guest:guest@localhost:5672//";
    let queue_name = "order_queue";
    let channel = initialize(addr, queue_name).await;

    let statuses = ["Pending", "Processing", "Shipped", "Delivered", 
      "Cancelled"];
    let today = Utc::now().naive_utc().date();
    let start_date = NaiveDate::from_ymd_opt(2025, 1, 1).unwrap();
    let total_records = 10;

    for _ in 0..total_records {
        let order = Order {
            order_id: Uuid::new_v4().to_string(),
            product_id: rand::thread_rng().gen_range(1..=10),
            customer_id: rand::thread_rng().gen_range(1..=5),
            order_date: generate_random_date(start_date, today),
            status: statuses[rand::thread_rng()
              .gen_range(0..statuses.len())].to_string(),
        };

        let mut buf = Vec::with_capacity(order.encoded_len());
        order.encode(&mut buf).expect("Error: Failed to encode order");

        channel.basic_publish("", queue_name, BasicPublishOptions::default(), 
          &buf, BasicProperties::default())
            .await
            .expect("Error: Failed to publish message")
            .await
            .expect("Error: Failed to confirm message");

        println!("Sent order data to queue: {:?}", order);

        sleep(TokioDuration::from_millis(50)).await;
    }
}

Step 6: Build and Run

Finally, build and run the project by executing the following commands at the Terminal Window in Visual Studio Code.

cargo clean
cargo build
cargo run

Install NuGet Package(s)

So far so good. The next step is to install the necessary NuGet package(s). To install the required packages into your project, right-click on the solution and then select Manage NuGet Packages for Solution…. Now search for the packages named protobuf-net and RabbitMQ.Client in the search box and install them one after the other. Alternatively, you can type the commands shown below at the NuGet Package Manager Command Prompt:

PM> Install-Package protobuf-net
PM> Install-Package RabbitMQ.Client

Alternatively, you can install these packages by executing the following commands from the command line:

dotnet add package protobuf-net
dotnet add package RabbitMQ.Client

Create a Minimal API Using ASP.NET Core and C#

In this section, you'll implement a Minimal API that retrieves order records from RabbitMQ and displays them using a Blazor front-end. The source code of this application comprises the following classes and interfaces:

  • Order class
  • IOrderRepository interface
  • OrderRepository class
  • IOrderService interface
  • OrderService class

Create the Order Class

Create a new class named Order in a file having the same name with a .cs extension and write the following code in there:

[ProtoContract]
public class Order
{
    [ProtoMember(1)]
    public string OrderId
    { 
        get; set; 
    } = "";

    [ProtoMember(2)]
    public uint ProductId
    { 
        get; set; 
    }

    [ProtoMember(3)]
    public uint CustomerId
    { 
        get; set; 
    }

    [ProtoMember(4)]
    public string OrderDate
    { 
        get; set; 
    } = "";

    [ProtoMember(5)]
    public string Status
    { 
        get; set; 
    } = "";
}

Create the Order Repository Class

You'll create a repository to store and retrieve the data read from the queue you created earlier using Rust. The following code snippet shows how you can declare the interface and the implementation types of this repository. The IOrderRepository interface declares the GetAllOrdersAsync and AddOrder methods, as shown below:

public interface IOrderRepository
{
    Task<IEnumerable<Order>> GetAllOrdersAsync();
    void AddOrder(Order order);
}

The OrderRepository class implements the IOrderRepository interface, as shown in the following piece of code:

public class OrderRepository : IOrderRepository
{
    private readonly ConcurrentBag<Order> _orders = new();

    public void AddOrder(Order order) => _orders.Add(order);

    public Task<IEnumerable<Order>> GetAllOrdersAsync() =>
        Task.FromResult<IEnumerable<Order>>(_orders);
}

Create the Order Service Class

Create a new interface named IOrderService in a file having the same name and replace the auto-generated code with the following piece of code:

public interface IOrderService
{
    Task<IEnumerable<Order>> FetchOrdersAsync();
}

The OrderService class should implement the IOrderService interface, as shown in the code snippet below:

public class OrderService : IOrderService, IDisposable
{

}

The FetchOrdersAsync method will be used to retrieve data from the queue.

public async Task<IEnumerable<Order>> FetchOrdersAsync()
{
    var fetchedOrders = new List<Order>();

    if (_channel == null)
        throw new InvalidOperationException("Error: Channel not initialized.");

    while (true)
    {
        var result = await _channel.BasicGetAsync(
            QueueName, autoAck: false);

        if (result == null)
            break;

        using var ms = new MemoryStream(result.Body.ToArray());
        var order = Serializer.Deserialize<Order>(ms);
        fetchedOrders.Add(order);

        _repository.AddOrder(order);
        await _channel.BasicAckAsync(result.DeliveryTag, false);
    }

    return fetchedOrders;
}

The complete source code of the OrderService class is given in Listing 4.

Listing 4: The OrderService class

public class OrderService :
    IOrderService, IDisposable
{
    private readonly IOrderRepository _repository;
    private readonly IConnection _connection;
    private readonly IChannel _channel;
    private const string QueueName = "order_records";

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;

        var factory = new ConnectionFactory
        {
            HostName = "localhost",
            UserName = "guest",
            Password = "guest"
        };

        _connection = factory.CreateConnectionAsync().Result;
        _channel = _connection.CreateChannelAsync().Result;
        _channel.QueueDeclareAsync(QueueName, durable: true, 
          exclusive: false, autoDelete: false, arguments: null);
    }

    public async Task<IEnumerable<Order>> FetchOrdersAsync()
    {
        var fetchedOrders = new List<Order>();

        if (_channel == null)
            throw new InvalidOperationException(
              "RabbitMQ channel is not initialized.");

        while (true)
        {
            var result = await _channel.BasicGetAsync(
              QueueName, autoAck: false);
            if (result == null)
                break;

            using var ms = new MemoryStream(result.Body.ToArray());
            var order = Serializer.Deserialize<Order>(ms);
            fetchedOrders.Add(order);
            _repository.AddOrder(order);
            await _channel.BasicAckAsync(result.DeliveryTag, false);
        }

        return fetchedOrders;
    }

    public void Dispose()
    {
        _channel?.CloseAsync();
        _connection?.CloseAsync();
    }
}

Register the Instances with IServiceCollection

The following code snippet illustrates how you can register the repository and service instances to the IServiceCollection:

// Register dependencies
builder.Services.AddScoped<IOrderRepository, OrderRepository>();
builder.Services.AddScoped<IOrderService, OrderService>();

Create the Endpoint in the Program.cs File

The following piece of code shows how you can create a minimal endpoint to fetch and return all orders available in the RabbitMQ queue:

app.MapGet("/orders", async (IOrderService orderService) =>
{
    var orders = await orderService.FetchOrdersAsync();
    return Results.Ok(orders);
});

The complete source code of the Program.cs file is given in Listing 5.

Listing 5: The Program.cs file

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ProtoBuf;
using RabbitMQ.Client;

var builder = WebApplication.CreateBuilder(args);

// Register dependencies
builder.Services.AddScoped<IOrderRepository, OrderRepository>();
builder.Services.AddScoped<IOrderService, OrderService>();

var app = builder.Build();

app.MapGet("/orders", async (IOrderService orderService) =>
    {
        var orders = await orderService.FetchOrdersAsync();
        return Results.Ok(orders);
    }
);

app.Run();

Execute the Application

Finally, right-click on the solution file in the Solution Explorer to invoke the Property Pages window. In this window, configure the solution to set all the projects as startup projects. When you run the .NET Aspire application, the order records will be displayed using Blazor.

I'll skip any discussion of how to build the user interface with Blazor here, because it's out of scope for this article. You can take a look at an earlier post on Blazor here: https://www.codemag.com/Article/2503041/Building-Modern-Web-Applications-Using-Blazor-ASP.NET-Core

Takeaways

Here are the key takeaways at a glance:

  • Polyglot architectures address the shortcomings of traditional distributed architectures by enabling you to leverage a conglomeration of technologies to build your application.
  • Rust is a great choice for building performance-critical applications.
  • The synergy between Rust and .NET is well suited to building cloud-native, distributed applications with polyglot microservices.