How I design APIs for long-term maintainability in .NET - MediatR + CQRS + Result
When I build REST APIs in .NET, I like to use a clean and organized approach. In this article, I will explain how I structure my APIs with MediatR and CQRS, and how I use FluentResults to handle errors. This is my opinionated way of doing it, and I will go through why I like it and the advantages it brings.
Why CQRS?
CQRS stands for Command Query Responsibility Segregation. It may sound complex, but the idea is very simple:
- Commands are used to change things (create, update, delete)
- Queries are used to read things (get data)
By splitting them, you get many benefits:
- ✅ Each class does one thing, easier to understand
- ✅ Handlers are small and focused
- ✅ Easier to test
- ✅ No more big service classes
This approach helps keep your code clean and organized. Instead of having one big service class that does everything, you have small classes that handle specific tasks. This makes it easier to find and fix bugs, add new features, and understand how the code works.
It also helps with merge conflicts in version control. When you have small, focused classes, changes are less likely to overlap, which makes it easier for teams to work together.
What is MediatR?
MediatR is just a small library that makes it easy to use CQRS in .NET. It helps send commands and queries without needing to know about the classes that handle them. You just send a command or query, and MediatR takes care of finding the right handler.
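To make this concrete, here is a rough sketch of what a command, a query, and a handler can look like (the names are illustrative, AppDbContext is a placeholder for your EF Core context, and the Result type is explained further down):
// Illustrative request definitions: a command changes state, a query only reads
public record CreateUserCommand(string Name, string Email) : IRequest<Result<Guid>>;
public record GetUserQuery(Guid Id) : IRequest<Result<User>>;

// Each request gets its own small, focused handler
public class GetUserQueryHandler : IRequestHandler<GetUserQuery, Result<User>>
{
    private readonly AppDbContext _dbContext; // placeholder DbContext name

    public GetUserQueryHandler(AppDbContext dbContext) => _dbContext = dbContext;

    public async Task<Result<User>> Handle(GetUserQuery request, CancellationToken cancellationToken)
    {
        var user = await _dbContext.Users.FindAsync(new object[] { request.Id }, cancellationToken);

        return user is null
            ? Result.Fail<User>("User not found")
            : Result.Ok(user);
    }
}
Sending a request is just await _mediator.Send(new GetUserQuery(id)), and MediatR finds the matching handler for you.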
You could easily implement this yourself, but I like using MediatR because it makes it easy to add behavior like logging, validation, or caching. You can create “behaviors” that run before or after the handlers.
For example, in my projects I use a DatabaseTransactionBehavior that starts a transaction before the handler runs and commits it after. If the handler fails, it rolls back the transaction. This way, I can ensure that the database stays consistent without having to write transaction code in every handler.
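Roughly, such a behavior can look like the sketch below. It assumes EF Core and a DbContext called AppDbContext (the exact Handle signature can differ slightly between MediatR versions):
public class DatabaseTransactionBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : notnull
{
    private readonly AppDbContext _dbContext; // placeholder DbContext name

    public DatabaseTransactionBehavior(AppDbContext dbContext) => _dbContext = dbContext;

    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        // Start a transaction before the handler runs.
        // If the handler throws, the transaction is disposed without a commit,
        // which rolls it back automatically.
        await using var transaction =
            await _dbContext.Database.BeginTransactionAsync(cancellationToken);

        var response = await next();

        // Roll back when the handler returned a failed Result, otherwise commit
        if (response is ResultBase { IsFailed: true })
        {
            await transaction.RollbackAsync(cancellationToken);
        }
        else
        {
            await transaction.CommitAsync(cancellationToken);
        }

        return response;
    }
}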
You can get creative and add any behavior you want, like logging, validation, or caching. This makes the code cleaner and easier to maintain.
The Result type and why I use it
I am of the opinion that handling errors well is one of the most important parts of building modern and reliable software. Errors should be handled gracefully (not just by throwing exceptions) and the user should get good feedback (not just a stack trace or a generic error 🤌).
Exceptions should be used for unexpected errors (like the name suggests), not for normal control flow. For example, if a user tries to create a resource that already exists, you should return a specific error message instead of throwing an exception.
Throwing exceptions for expected errors can lead to performance issues and makes it harder to understand the flow of the code. Instead, I use a Result type that represents the outcome of an operation. Every method I have, even simple ones, returns a Result object that indicates whether the operation was successful or not.
Just for convenience, I use the FluentResults library, which provides a nice way to handle results and errors. It allows you to return a result that can either be successful or contain errors.
Example:
public async Task<Result<User>> GetUserAsync(Guid id)
{
    var user = await _dbContext.Users.FindAsync(id);

    // not an exception, just normal control flow:
    // if the user is not found, return a failure result
    // this way the caller can handle it gracefully
    if (user == null)
    {
        // the frontend will get a clear message
        return Result.Fail<User>("User not found");
    }

    return Result.Ok(user);
}
Controllers should be clean
In my controllers, I don’t write manual logic for checking results. Instead, I use an extension method I wrote called ToActionResult(). This method converts the Result<T> into the correct HTTP response, like:
- 200 OK for success
- 400 Bad Request if there’s a validation error
- 404 Not Found if the item is missing
So the controller stays very clean:
[HttpGet("{id}")]
public async Task<IActionResult> GetUser(Guid id)
{
    var result = await _mediator.Send(new GetUserQuery(id));
    return result.ToActionResult();
}
No manual checks, no if statements, just a clean and simple way to handle the response. This keeps the controller focused on routing and doesn’t clutter it with error handling logic.
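The extension method itself is only a few lines. Here is a simplified sketch of what it can look like; the NotFoundError type is just an example of a custom error class used to mark missing resources:
// Example error type used to signal "resource not found" failures
public class NotFoundError : Error
{
    public NotFoundError(string message) : base(message) { }
}

public static class ResultExtensions
{
    public static IActionResult ToActionResult<T>(this Result<T> result)
    {
        // 200 OK with the value when the operation succeeded
        if (result.IsSuccess)
        {
            return new OkObjectResult(result.Value);
        }

        var messages = result.Errors.Select(e => e.Message).ToList();

        // 404 Not Found when the failure was marked as "not found"
        if (result.HasError<NotFoundError>())
        {
            return new NotFoundObjectResult(messages);
        }

        // 400 Bad Request for validation and other expected errors
        return new BadRequestObjectResult(messages);
    }
}
With this approach, a handler would return Result.Fail<User>(new NotFoundError("User not found")) so the extension knows to map it to a 404.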
Folder and project structure: keep it simple
One of the biggest challenges in large projects is organizing the code. When the application grows it can become hard to find files and understand the flow of the code.
I’ve seen many projects fall into overcomplicated folder structures. They create deep, abstract layers (like Core, Domain, Services, Application, BusinessLogic, etc.), often spread across different projects. This makes it hard to find things and understand the flow of the code, and in the end you waste time just trying to decide where a piece of logic should go.
I prefer a simple and flat structure. I group files by feature and then by type. This means that all the files related to a specific feature (like users) are in one folder, making it easy to find everything related to that feature.
Example structure:
src/
├── application/
│   ├── Features/
│   │   ├── Users/
│   │   │   ├── Commands/
│   │   │   │   ├── CreateUserCommand.cs
│   │   │   │   ├── UpdateUserCommand.cs
│   │   │   │   ├── DeleteUserCommand.cs
│   │   │   ├── Queries/
│   │   │   │   ├── GetUserQuery.cs
│   │   │   ├── Models/
│   │   │   │   ├── CreateUserDto.cs
For the project structure, I have:
- A Web API project: for controllers and startup logic
- An application project: a plain C# class library with all the business logic
- An Entities project: just for the database entities
All my commands, queries, handlers, and models live in the application project.
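The Web API project mostly just wires everything up. Assuming MediatR 12+ and FluentValidation’s DependencyInjectionExtensions package, the registration in Program.cs can look roughly like this:
// Rough sketch of the startup wiring in the Web API project
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Register all handlers from the application project
builder.Services.AddMediatR(cfg =>
{
    cfg.RegisterServicesFromAssembly(typeof(CreateUserCommand).Assembly);

    // Pipeline behaviors run around every handler (like the transaction behavior above)
    cfg.AddOpenBehavior(typeof(DatabaseTransactionBehavior<,>));
});

// Register all FluentValidation validators from the application project
builder.Services.AddValidatorsFromAssembly(typeof(CreateUserCommandValidator).Assembly);

var app = builder.Build();
app.MapControllers();
app.Run();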
Validation: checking what the user sends
I use FluentValidation to validate user input. It allows me to define validation rules in a clean and organized way. I create a validator for each command or query, which checks the input before it reaches the handler.
This way, I can ensure that the data is valid before processing it, and if it’s not, I return a clear error message. Example of a validator:
public class CreateUserCommandValidator : AbstractValidator<CreateUserCommand>
{
    public CreateUserCommandValidator()
    {
        RuleFor(x => x.Name).NotEmpty().WithMessage("Name is required");
        RuleFor(x => x.Email).EmailAddress().WithMessage("Invalid email address");
    }
}
I also create validators for DTOs (Data Transfer Objects) and models.
I am of the opinion that adding unnecessary code is not a good idea, but validation is important. Even if I have a simple DTO, I still create a validator for it. This way, I can ensure that the data is valid before it reaches the application layer.
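To run these validators automatically before the handler, they can be hooked into the MediatR pipeline as a behavior, as mentioned earlier. Here is one possible sketch that turns validation failures into a failed Result (it assumes every response type is a FluentResults result, which is the case in my setup):
// Possible sketch of a validation behavior: it assumes the response type is a
// FluentResults result, so a failed result can be built generically
public class ValidationBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : notnull
    where TResponse : ResultBase, new()
{
    private readonly IEnumerable<IValidator<TRequest>> _validators;

    public ValidationBehavior(IEnumerable<IValidator<TRequest>> validators)
        => _validators = validators;

    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        foreach (var validator in _validators)
        {
            var validation = await validator.ValidateAsync(request, cancellationToken);
            if (!validation.IsValid)
            {
                // Turn the validation failures into a failed Result instead of throwing
                var failed = new TResponse();
                foreach (var failure in validation.Errors)
                {
                    failed.Reasons.Add(new Error(failure.ErrorMessage));
                }
                return failed;
            }
        }

        // Input is valid, continue to the handler
        return await next();
    }
}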
Testing: simple and reliable
Thanks to this structure, testing becomes very easy and natural. I mainly focus on integration tests that run against a real or in-memory database. These tests send full commands or queries through the system, just like the real application.
Because everything is separated and clean, I can test the full flow: validation, handler logic, database, and results. And since every handler returns a Result, it’s easy to check both success and failure cases. I can assert if the result is successful, check error messages, or verify that data was saved correctly. This makes my tests useful, realistic, and close to how the API really works.
I also write unit tests for individual handlers, but I focus on integration tests for the most part.
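As an illustration, such an integration test can look roughly like this (ApiTestFixture is a hypothetical fixture that sets up the test database and exposes a configured IMediator, and xUnit is used here as an example):
public class CreateUserTests : IClassFixture<ApiTestFixture>
{
    private readonly ApiTestFixture _fixture; // hypothetical test fixture

    public CreateUserTests(ApiTestFixture fixture) => _fixture = fixture;

    [Fact]
    public async Task CreateUser_WithValidData_ReturnsSuccess()
    {
        var command = new CreateUserCommand("Jane", "jane@example.com");

        var result = await _fixture.Mediator.Send(command);

        // The full pipeline ran: validation, transaction, handler, database
        Assert.True(result.IsSuccess);
    }

    [Fact]
    public async Task CreateUser_WithEmptyName_ReturnsFailure()
    {
        var command = new CreateUserCommand("", "jane@example.com");

        var result = await _fixture.Mediator.Send(command);

        // Validation failures come back as a failed Result, not an exception
        Assert.True(result.IsFailed);
        Assert.Contains(result.Errors, e => e.Message == "Name is required");
    }
}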
Conclusion: don’t be too strict
I believe it’s important to have a structure and follow an approach, especially as the project grows. But at the same time, we must be careful not to over-engineer everything.
Sometimes we become too strict with rules and folder structures. We add layers, interfaces, abstractions, and patterns everywhere — even when it’s not needed. The result? Slower development, harder debugging, and too much thinking about architecture instead of solving real problems.
My rule is simple:
Be organized, but stay practical.
Use structure to help you, not to slow you down. If something doesn’t add real value, just leave it. Keep the code clean, not complicated.