Dev Containers for .NET in VS Code: A Beginner‑Friendly Guide That Actually Works
Essentially, Dev Containers let you use a Docker container as your development environment inside VS Code.
However, the main takeaway isn’t simply “Docker for development.”
It’s about moving all the complexity of your development environment off your laptop and into a version-controlled setup.
When you use Dev Containers:
- Your laptop functions solely as a VS Code client.
- All your tools, SDKs, runtimes, and dependencies are contained within the Docker container.
- Your project dictates its own development environment, leaving your machine untouched.
This setup has several benefits:
- You can shift between projects without any risk of breaking things.
- Deleting and recreating your environment is completely safe.
- New developers can onboard seamlessly, without needing any insider knowledge.
Getting started with .NET development may seem straightforward at first glance, but it can quickly become complicated.
Here are some common issues:
- Different team members using varied .NET SDK versions.
- One project might require .NET 6, whereas another needs .NET 8.
- Native dependencies that function on one computer may fail on another.
- CI processes operating on Linux while developers work on Windows.
Dev Containers can help resolve these challenges by:
- Ensuring the SDK version and operating system used for development are locked down.
- Running everything inside a Linux container, closely mirroring the CI and production environments.
- Preventing developer machines from becoming cluttered or unstable.
- Making onboarding almost instantaneous: just clone, reopen in the container, and run.
Once the .devcontainer folder is in your repository, the environment becomes part of your codebase rather than a document someone has to keep up to date.
You don’t need to be an expert in Docker to make use of Dev Containers.
Here’s a straightforward way to understand how it works:
- Your repository includes a .devcontainer folder.
- Inside this folder, the devcontainer.json file defines your development environment.
- VS Code reads this file and initiates a container.
- VS Code then connects to this container and runs the necessary extensions within it.
Your source code stays stored on your machine, but:
- The terminal operates inside the container.
- The debugger functions inside the container.
- All SDKs are contained within the container.
If something goes wrong, you just rebuild the container rather than your whole laptop.
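The mental model above maps to a surprisingly small config file. As a sketch, a minimal devcontainer.json — assuming Microsoft’s official .NET 8 dev container image — can look like this (a fuller, Compose-based example appears later in this guide):

```json
{
  "name": "my-dotnet-app",
  "image": "mcr.microsoft.com/devcontainers/dotnet:1-8.0",
  "postCreateCommand": "dotnet restore"
}
```

VS Code pulls the image, mounts your source into the container, and runs `dotnet restore` once the container is created.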
Dev Containers work particularly well in scenarios where:
- You’re juggling multiple projects with differing requirements.
- Your team faces challenges with consistent environments.
- You’re aiming for parity with Linux for CI and containerised deployments.
- You prefer a reproducible environment over an ad-hoc local configuration.
However, they might not be the best option if:
- You’re only developing very small, disposable scripts.
- You rely heavily on Windows-specific tools.
- You can’t use Docker within your environment at all.
For most professional .NET teams, the advantages significantly outweigh any disadvantages.
When starting with Dev Containers on Windows, a key decision is determining how Docker will run on your machine. Both Docker Desktop and Docker Engine running inside WSL are effective with Dev Containers but cater to slightly different needs.
Docker Desktop is a simple and user-friendly option for launching Dev Containers.
Pros include:
- Quick setup requiring minimal configuration.
- Offers a graphical dashboard for managing containers, images, and logs.
- Works well with VS Code and WSL2.
- Simplifies troubleshooting for beginners.
Cons are:
- It consumes more system resources in the background.
- Runs additional services even when you’re not actively developing.
- Some enterprise environments may have restrictions or licensing differences.
Consider using Docker Desktop if:
- You’re new to Docker or Dev Containers.
- You want the quickest and most straightforward setup.
- You prefer ease of use over detailed control.
- You’re working on personal projects or in environments where Docker Desktop is permitted.
For most developers starting with Dev Containers, Docker Desktop is the suggested entry point.
The alternative is Docker Engine in WSL: installing Docker Engine directly in a Linux distribution, such as Ubuntu, running on WSL2 without Docker Desktop.
Pros include:
- Lower resource consumption compared to Docker Desktop.
- Behaviour that’s native to Linux, making it closer to CI and production.
- No dependency on Docker Desktop.
- Often favoured in enterprise or constrained environments.
Cons are:
- Requires manual setup and configuration.
- Demands basic Linux and WSL familiarity.
- No graphical interface; everything is command-line based.
Consider using Docker Engine in WSL if:
- Docker Desktop is prohibited or limited.
- You prefer a leaner, Linux-first workflow.
- You primarily work in WSL.
- You desire tighter control over your Docker setup.
This method is ideal once you’re comfortable using Docker and WSL.
Note: Avoid using both Docker Desktop and Docker Engine in WSL at the same time. Choose one approach and stick with it.
Operating both concurrently often leads to confusion with Docker contexts and unpredicted failures with Dev Containers, even when the configuration appears correct.
If you’re employing Linux containers with WSL, keep your code within the WSL filesystem.
Recommended: /home/<username>/projects/your-repo
Avoid: /mnt/c/Users/<username>/your-repo
Accessing Windows files from Linux containers slows builds and causes file-watching problems; keeping my repo inside the WSL filesystem made Dev Containers feel nearly native.
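A quick way to check which filesystem your repo is on is to look at the path prefix. This is a small sketch; the `repo_location` helper is illustrative, not a standard tool:

```shell
# Classify a path: anything under /mnt/ is a Windows drive mounted into WSL,
# everything else lives on the native Linux filesystem.
repo_location() {
  case "$1" in
    /mnt/*) echo "windows" ;;
    *)      echo "linux" ;;
  esac
}

repo_location "$PWD"   # run this from your repo root
```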
If you’re trying Dev Containers for the first time, follow these steps in order:
- Install Visual Studio Code.
- Install the Dev Containers extension.
- Set up Docker Desktop or Docker Engine in WSL.
- Clone your repository into the WSL filesystem.
- Open the folder in VS Code.
- Select “Dev Containers: Reopen in Container.”
That’s all there is to it. VS Code will take care of the rest.
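Before reopening in the container, a quick pre-flight check can save a confusing failure later. A sketch — `check_tools` is just an illustrative helper, not a standard command:

```shell
# Report whether each tool the steps above rely on is available on PATH.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
    fi
  done
}

check_tools git docker code
```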
To make this concrete, let’s build a small blog API inside a Dev Container.
Tech stack:
- .NET 8 Web API
- PostgreSQL 16
- Entity Framework Core + Npgsql
- VS Code Dev Containers
- Docker Compose
Project structure:

```
my-blog-api/
├─ .devcontainer/
│  └─ devcontainer.json
├─ docker-compose.yml
└─ src/
   └─ BlogApi/
      ├─ Program.cs
      ├─ BlogApi.csproj
      ├─ appsettings.json
      ├─ Models/
      └─ Data/
```

Create the project and add the EF Core packages:

```shell
mkdir my-blog-api
cd my-blog-api
mkdir src && cd src
dotnet new webapi -n BlogApi
cd BlogApi
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
dotnet add package Microsoft.EntityFrameworkCore.Design
```

Create a docker-compose.yml file in the root of your repo:
```yaml
version: "3.8"

services:
  app:
    image: mcr.microsoft.com/devcontainers/dotnet:1-8.0
    volumes:
      - .:/workspace:cached
    working_dir: /workspace
    command: sleep infinity
    ports:
      - "5000:5000"
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpwd
      POSTGRES_DB: devdb
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
    depends_on:
      - db

volumes:
  pgdata:
```

Create the .devcontainer/devcontainer.json file:
```json
{
  "name": "dotnet-postgres-devcontainer",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "shutdownAction": "stopCompose",
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-dotnettools.csdevkit",
        "ms-dotnettools.csharp",
        "ms-azuretools.vscode-docker"
      ]
    }
  },
  "postCreateCommand": "dotnet restore"
}
```

Open the folder in VS Code and execute:
Dev Containers: Reopen in Container
Update the appsettings.json file as follows:
```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Host=db;Port=5432;Database=devdb;Username=devuser;Password=devpwd"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*"
}
```

Using “Host=db” works because Docker Compose provides internal DNS between services.
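The hostname `db` is simply the Compose service key — this fragment of the docker-compose.yml above is all that wires it up:

```yaml
services:
  db:               # this service name doubles as the DNS hostname
    image: postgres:16
```

Any container on the same Compose network can reach PostgreSQL at `db:5432`, no IP addresses needed.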
Create the Post model (Models/Post.cs):

```csharp
namespace BlogApi.Models;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; } = string.Empty;
    public string Content { get; set; } = string.Empty;
    public DateTime CreatedUtc { get; set; } = DateTime.UtcNow;
}
```

Create the DbContext (Data/BlogDbContext.cs):

```csharp
using BlogApi.Models;
using Microsoft.EntityFrameworkCore;

namespace BlogApi.Data;

public class BlogDbContext : DbContext
{
    public BlogDbContext(DbContextOptions<BlogDbContext> options) : base(options) { }

    public DbSet<Post> Posts => Set<Post>();
}
```

Replace the contents of Program.cs with:
```csharp
using BlogApi.Data;
using BlogApi.Models;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext<BlogDbContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("DefaultConnection")));

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Apply migrations on startup (dev-only convenience)
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<BlogDbContext>();
    db.Database.Migrate();
}

app.UseSwagger();
app.UseSwaggerUI();

app.MapGet("/posts", async (BlogDbContext db) =>
    await db.Posts.OrderByDescending(p => p.CreatedUtc).ToListAsync());

app.MapPost("/posts", async (Post post, BlogDbContext db) =>
{
    db.Posts.Add(post);
    await db.SaveChangesAsync();
    return Results.Created($"/posts/{post.Id}", post);
});

app.MapPut("/posts/{id:int}", async (int id, Post input, BlogDbContext db) =>
{
    var post = await db.Posts.FindAsync(id);
    if (post is null) return Results.NotFound();
    post.Title = input.Title;
    post.Content = input.Content;
    await db.SaveChangesAsync();
    return Results.Ok(post);
});

app.MapDelete("/posts/{id:int}", async (int id, BlogDbContext db) =>
{
    var post = await db.Posts.FindAsync(id);
    if (post is null) return Results.NotFound();
    db.Posts.Remove(post);
    await db.SaveChangesAsync();
    return Results.NoContent();
});

app.Run("http://0.0.0.0:5000");
```

Inside the Dev Container terminal, create the first migration and apply it:

```shell
cd src/BlogApi
dotnet tool install --global dotnet-ef
export PATH="$PATH:/home/vscode/.dotnet/tools"
dotnet ef migrations add InitialCreate
dotnet ef database update
```

Then run the API:

```shell
dotnet run
```

Open http://localhost:5000/swagger in your browser to try the endpoints.
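Once the API is running, you can also exercise the endpoints from another terminal. A sketch with curl — the `post_body` helper is illustrative, and it assumes the app is listening on localhost:5000 as configured above:

```shell
# Build a JSON body for POST /posts. Property names match the Post model;
# ASP.NET Core binds JSON case-insensitively by default.
post_body() {
  printf '{"title":"%s","content":"%s"}' "$1" "$2"
}

# Create a post, then list all posts.
post_body "Hello" "First post" |
  curl -s -X POST http://localhost:5000/posts \
       -H "Content-Type: application/json" -d @- ||
  echo "request failed - is the API running?"

curl -s http://localhost:5000/posts ||
  echo "request failed - is the API running?"
```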
| Issue | Symptom | Solution |
|---|---|---|
| Mixing Docker setups | Random failures | Stick to only one Docker setup |
| Code under /mnt/c | Slow builds | Relocate the repo to the WSL filesystem |
| Docker not running | Container fails to start | Run `docker info` to check |
| Pruning doesn’t help | Issues persist | Resolve daemon/context issues first |
Here are the problems I ran into along the way:
- Multiple Docker engines running simultaneously: Docker Desktop alongside Docker Engine in WSL can lead to conflicts.
- Unstable Docker CLI context: the Docker CLI occasionally connected to different or broken Docker endpoints.
- Docker daemon shown as running yet unusable: Docker commands failed despite the daemon seeming active.
- systemd dependency issues inside WSL: Docker Engine relies on systemd, which may not be consistently active after WSL restarts.
- Dev Containers failing during setup: VS Code’s Dev Containers failed while installing features and building.
- Inaccurate Docker error messages, which often masked the real underlying issues.
- Ineffective cache cleanup: pruning images and containers failed to resolve the core daemon problems.
- Poor container observability: even though PostgreSQL and pgAdmin worked, there was little clarity about container health, volumes, and data locations.
Check the Docker client and server:

```shell
docker version
docker info
```

Ensure that only one server is displayed. No mention of dockerDesktopLinuxEngine should be present when using native WSL Docker.

Inspect and reset the Docker CLI context:

```shell
docker context ls
docker context show
docker context use default
```

The context should point to the desired daemon, whether that’s WSL or Desktop.

Confirm the daemon is actually usable:

```shell
docker info
docker ps
```

These commands should execute without API, 500, or version errors. Avoid proceeding if they fail.
Check that systemd is enabled in WSL:

```shell
cat /etc/wsl.conf
```

Expected:

```
[boot]
systemd=true
```

If you change this file, restart WSL from Windows first:

```shell
wsl --shutdown
```

Then manage the Docker service inside WSL:

```shell
systemctl status docker
sudo systemctl start docker
sudo systemctl enable docker
```

Check the status with:

```shell
docker ps
```

Pre-check your Compose configuration:

```shell
docker compose config
docker compose up -d
```

Your configuration should be working here before you launch your Dev Container.
Inspect images, containers, and logs:

```shell
docker images
docker ps
docker inspect <container>
docker logs <container>
```

Port checks can be done with:

```shell
ss -lntp | grep <port>
```

Only run these commands once your daemon is functioning properly.
If issues persist, prune only after the daemon and context problems are fixed:

```shell
docker system prune -f
docker volume prune -f
```

A clean restart sequence:

```shell
wsl --shutdown
# then restart WSL
sudo systemctl start docker
docker context show
docker info
docker compose up -d
code .
# Only after these steps, run “Dev Containers: Reopen in Container.”
```

To stop and remove all containers:

```shell
docker ps
docker stop $(docker ps -q)
docker rm -f $(docker ps -aq)
docker ps
```

Dev Containers transform local development from fragile, machine-specific setups into consistent, version-controlled environments.
With Dev Containers, Docker Compose, PostgreSQL, and pgAdmin, your entire .NET development stack lives inside containers, not on your laptop. This keeps your SDKs, databases, and tools isolated, consistent, and easy to rebuild.
When issues arise, rebuild containers rather than tinkering with your laptop.
This approach not only simplifies onboarding but also improves parity with Linux CI, and it eliminates the troublesome “works on my machine” dilemma. Once Docker is stable, Dev Containers prove to be one of the most dependable ways to build modern .NET applications.
Key takeaways:
- Dev Containers treat the development environment like code
- .NET, PostgreSQL, and pgAdmin run fully isolated inside containers
- pgAdmin offers clear visibility into database states and migrations
- Stable Docker is essential; Dev Containers are not a quick fix for Docker issues
- Onboarding becomes a breeze: clone → reopen in container → run
- Focus on rebuilding containers rather than machines