AI-Assisted Greenfield Software Development, Part 5: Technology Guardrails

by John Miller | First published: February 27, 2026 - Last Updated: March 2, 2026

This is the fifth post in the series on AI-assisted greenfield software development. If you're joining mid-stream, you may want to begin with Part 1: Business Requirements, where I established the foundation for an AI-driven development workflow. Part 2, Part 3, and Part 4 expanded that foundation with scaffolding, project-level instructions, and Git workflow guidance.

In this post, I'll shift from architecture-level instructions to technology-level instructions. This is where I'll define the standards and practices that guide AI agents as they generate code—ensuring consistency, correctness, and architectural integrity across the entire solution.

Why Technology Instruction Files Matter

When humans build software, implementation standards are often implicit; shared understanding, code reviews, and experience enforce consistency. AI doesn't have that luxury. Without explicit technological rules, AI agents tend to optimize for convenience rather than coherence.

Instruction files solve this by providing:

  • Language-specific coding standards and conventions
  • Framework-specific implementation patterns
  • Technology integration guidelines
  • Code organization and folder structure
  • Anti-patterns and gotchas to avoid
  • Testing expectations and coverage targets
  • Quality standards and validation checklists

These files become the “rails” that keep AI-generated code aligned with your project's technical vision, ensuring consistency across all technologies in your stack.

Updating the Project Overview

Before diving into the technology-specific instruction files, it's important to ensure that our project overview accurately reflects the current state of our architectural decisions. As the project evolves and we make technology choices, we need to keep our foundational documentation current.

To update the project overview with our current technology stack and architectural direction, I ran the following prompt:

Submit this prompt #file:create-project-overview.prompt.md with these argument values:
project_name: zeus.academia
project_type: web app
primary_language: C#, TypeScript
key_technologies: Vue 3, Pinia, REST, GraphQL, Vite, Azure Static Web Apps, ASP.NET Core, MediatR, 
                  FluentValidation
project_purpose: Academic Management System

This prompt generates or updates comprehensive project documentation that includes:

  • Project identity and purpose with clear positioning as an academic management system
  • Technology stack overview documenting our chosen frameworks and tools
  • Architecture approach reflecting our Command Query Responsibility Segregation (CQRS) implementation strategy
  • Development workflow aligned with our AI-assisted methodology
  • Integration patterns between frontend (Vue 3) and backend (ASP.NET Core) components

Running this prompt ensures that anyone joining the project, whether human developers or AI agents, has an accurate, up-to-date understanding of what we're building and how we're building it. This foundational clarity becomes especially important as we implement more complex architectural patterns like CQRS.

Understanding the Project Overview Instructions File

The project-overview.instructions.md file serves as the foundational context document for the Zeus Academia project, providing AI assistants with essential project information needed to work effectively across the entire codebase. This instruction file is marked to apply globally (applyTo: "**"), meaning it's automatically loaded as context whenever you interact with any file in the repository. It acts as a “project primer” that ensures AI assistants understand the project's identity (Academic Management System), architecture (monorepo with Vue 3 frontend and ASP.NET Core backend), technology stack (C#, TypeScript, MediatR, Pinia, FluentValidation), and core constraints (OAuth 2.0 authentication, WCAG 2.1 AA accessibility, FERPA/GDPR compliance, 80% test coverage).

The file is structured to provide quick, token-optimized reference information across several key dimensions:

  • Architecture: Explains the monorepo structure, separation of frontend/backend, and key directories.

  • Standards: Defines coding conventions (PascalCase for C#, camelCase for TypeScript), patterns (CQRS via MediatR, Repository pattern), and testing requirements (xUnit, Vitest, 80% coverage).

  • Environment: Specifies deployment targets (Azure Static Web Apps, Azure App Service), runtime expectations (modern browsers, .NET 8+), and dependencies (Azure AD B2C, Azure SQL/Cosmos DB).

  • Constraints: Sets non-negotiable requirements for security (HTTPS-only, RBAC), performance (API <500ms p95, page load <3s), accessibility (WCAG 2.1 AA), privacy (FERPA/GDPR), and scale (10k+ concurrent users).

  • Critical Files: Points to other essential instruction files for detailed guidance on CQRS implementation, C# coding standards, AI workflows, and Git processes.

This overview file is referenced by all technology-specific instruction files (Vue 3, ASP.NET Core, MediatR, etc.) to maintain consistency and avoid duplication. It ensures that any AI-assisted code generation, documentation, or refactoring work aligns with the project's technical standards, architectural decisions, and compliance requirements. The file itself was AI-generated (as indicated by the YAML front matter) and follows the repository's AI provenance policy, including complete metadata about its creation, the prompt used, task durations, and links to the conversation log.
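To make the mechanics concrete, here is a minimal sketch of the front matter such an instructions file carries. The applyTo glob comes from the file itself; the description value is illustrative, and the actual provenance metadata in the repository is richer than what's shown here:

```yaml
---
applyTo: "**"   # load this file as context for every file in the repository
description: "Project primer: identity, architecture, standards, constraints"
---
```

The `applyTo: "**"` glob is what makes this file a global primer rather than a technology-specific rulebook.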

Creating Technology-Specific Instruction Files

With our project overview updated to reflect the current technology stack, the next step is to create detailed instruction files for each technology we'll be implementing. Rather than manually crafting these files, we can leverage AI to generate comprehensive, consistent instruction files tailored to our specific technology choices.

To generate technology-specific instruction files for all the technologies documented in our project overview, I submitted this prompt:

Create a prompt file that creates technology-specific instruction files for the technologies specified 
in the #file:project-overview.instructions.md

This prompt creates a meta-prompt file that can generate individual instruction files for each technology in our stack, including:

  • Frontend technologies - Vue 3, Pinia, Vite configuration and development patterns
  • Backend frameworks - ASP.NET Core, MediatR, FluentValidation implementation standards
  • API patterns - REST and GraphQL service design and implementation guidelines
  • Deployment targets - Azure Static Web Apps configuration and CI/CD workflows
  • Integration approaches - How these technologies work together within our CQRS architecture

This systematic approach ensures that each technology instruction file maintains consistency with our overall architectural vision while providing the specific implementation guidance that AI agents need to generate quality code. The resulting instruction files will serve as the detailed technical foundation that bridges our high-level architecture decisions with actual code generation.

Implementing Technology-Specific Instructions

With the meta-prompt file created, we can now systematically generate instruction files for each technology in our project stack. Rather than creating all instruction files at once, the systematic approach involves submitting the technology instruction prompt for each technology individually, allowing for focused, detailed guidance tailored to each component.

To generate comprehensive instruction files for each technology documented in our project overview, I submitted this prompt:

Submit the #file:create-technology-instructions.prompt.md for each of the technologies specified 
in the #file:project-overview.instructions.md file

This approach creates individual instruction files for each technology as described below.

Frontend Technology Instructions

  • Vue 3 implementation patterns including component composition, reactivity patterns, and lifecycle management
  • Pinia state management with store organization, state mutations, and persistence strategies
  • Vite configuration for development workflows, build optimization, and asset handling
  • Frontend integration patterns defining how these technologies work together cohesively

Backend Technology Instructions

  • ASP.NET Core service patterns covering controller design, dependency injection, and middleware configuration
  • MediatR implementation standards for command/query handling, pipeline behaviors, and cross-cutting concerns
  • FluentValidation integration with validation rules, error handling, and localization support
  • API design guidelines for both REST and GraphQL implementations

Infrastructure Integration Instructions

  • Azure Static Web Apps deployment including continuous integration/continuous deployment (CI/CD) pipeline configuration and environment management
  • Cross-application communication patterns between frontend and backend services
  • Authentication and authorization strategies that work across the technology stack
  • Performance optimization techniques specific to each technology layer

Quality Assurance Instructions

  • Testing strategies tailored to each technology (Vitest for Vue 3, xUnit for ASP.NET Core)
  • Code quality standards including linting rules, formatting conventions, and architectural compliance
  • Integration testing patterns that validate cross-technology communication
  • Documentation requirements for API endpoints, component interfaces, and deployment procedures

This systematic generation process keeps each technology instruction file consistent with our overall architectural vision while providing the specific, actionable guidance that AI agents need to generate high-quality, pattern-compliant code. Together, the resulting files form the technical foundation that connects our architectural decisions to actual implementation.

Executing the Technology Instructions Generation

With the meta-prompt file in place, the next step is to actually generate the individual instruction files for each technology in our stack. This process transforms our high-level technology choices into concrete, actionable guidance that AI agents can follow consistently. Submitting the prompt shown in the previous section directs the AI to:

  1. Read the project overview file to identify all technologies in the current stack
  2. Apply the technology instruction template to each identified technology
  3. Generate individual instruction files with technology-specific implementation guidelines
  4. Ensure consistency across all generated instruction files while maintaining technology-specific focus

The execution results in a comprehensive set of instruction files, each tailored to a specific technology but aligned with the overall project architecture. For example:

  • aspnetcore-implementation.instructions.md - ASP.NET Core coding standards and patterns for backend development
  • csharp-implementation.instructions.md - Modern C# coding standards and patterns for backend development
  • fluentvalidation-implementation.instructions.md - Validation patterns and best practices with FluentValidation
  • mediatr-implementation.instructions.md - Command/query handling and pipeline behaviors with MediatR
  • pinia-implementation.instructions.md - State management patterns and best practices for Pinia
  • typescript-frontend-implementation.instructions.md - TypeScript coding standards and patterns for frontend development
  • vitest-implementation.instructions.md - Unit testing patterns and best practices with Vitest
  • vue3-implementation.instructions.md - Component patterns, composition API usage, and Vue 3 best practices
  • xunit-implementation.instructions.md - Unit testing patterns and best practices with xUnit

Each generated instruction file contains specific implementation rules, code templates, anti-pattern prevention, and integration guidance that ensures AI agents produce consistent, high-quality code aligned with our architectural vision.

aspnetcore-implementation.instructions.md

The ASP.NET Core implementation standards instruction file establishes comprehensive guidelines for building the backend API layer of the Zeus Academia Academic Management System using ASP.NET Core 8.0+. It mandates a clean architecture approach with minimal APIs for simple endpoints and MVC controllers for complex logic, while enforcing async/await patterns, constructor-based dependency injection, and nullable reference types throughout.

The file provides detailed code examples for minimal API endpoints, controller implementations, Program.cs configuration, exception handling middleware, and dependency injection setup, all integrated with MediatR for CQRS patterns and FluentValidation for request validation. It includes a validation checklist covering async operations, HTTP status codes, logging, cancellation tokens, authorization, and health checks, while explicitly calling out anti-patterns like synchronous I/O, Task.Result/Task.Wait(), service locator patterns, business logic in controllers, and suppressing nullable warnings.

The standards ensure that all backend code follows consistent patterns for Azure AD B2C authentication, Entity Framework Core data access, proper error handling with ProblemDetails responses, and comprehensive OpenAPI documentation for all endpoints.

In addition to the ASP.NET Core standards described above, at this point the AI has generated:

  • C# implementation standards instruction file
  • FluentValidation implementation standards instruction file
  • MediatR CQRS implementation standards instruction file
  • Pinia state management standards instruction file
  • TypeScript frontend implementation standards instruction file
  • Vitest testing standards instruction file
  • Vue 3 implementation standards instruction file
  • xUnit testing standards instruction file

Details are described below.

csharp-implementation.instructions.md

The C# Implementation standards instruction file establishes comprehensive modern C# coding guidelines for the Zeus Academia project, mandating .NET 8+ with nullable reference types enabled, file-scoped namespaces, and strict adherence to SOLID principles and Clean Architecture patterns.

It provides detailed naming conventions: PascalCase for public members, camelCase for private fields, and an Async suffix for asynchronous methods. It enforces one type per file, a consistent member order (constructors, then properties, then methods), and folder structures that match namespaces. The file extensively documents type selection criteria, favoring record types for immutable data transfer objects (DTOs) and value objects with structural equality and class types for mutable entities with identity equality, along with modern C# 11+ features including required members, raw string literals, list patterns, and expression-bodied members for concise single-statement implementations.

It mandates async/await patterns for all I/O operations with proper CancellationToken propagation and strictly prohibits blocking calls like .Result or .Wait(). It requires constructor-based dependency injection over service locator patterns, enforces specific exception types over generic Exception, and provides LINQ optimization guidelines: prefer method syntax, prefer Any() over Count() > 0, and avoid unnecessary materialization with .ToList().

The standards include comprehensive XML documentation requirements for public APIs, a detailed anti-patterns table highlighting common mistakes like primitive obsession and mutable static state, and a validation checklist ensuring compiler warnings are resolved, nullable reference types are properly handled, and all code follows consistent patterns. This file serves as the foundational C# guidance that specialized instruction files, such as the CQRS and MediatR implementations, build upon.

fluentvalidation-implementation.instructions.md

The FluentValidation implementation standards instruction file establishes comprehensive validation patterns for the Zeus Academia backend, mandating one validator per MediatR command or query with FluentValidation 11.0+ to enforce declarative, fail-fast request validation in the CQRS pipeline.

It provides detailed code examples for basic command validators with chained validation rules (NotEmpty, MaximumLength, Matches), conditional validation using When() for context-specific rules, async validators with MustAsync for database uniqueness checks and existence verification, nested object validation using SetValidator() for complex types, and collection validation with RuleForEach() for batch operations.

The file demonstrates custom error message formatting with property names and values, error code assignment for API responses, reusable validation rule extensions for common patterns like GUID validation and alphanumeric checks, and performance optimization techniques including compiled regex caching and strategic ordering of cheap-before-expensive validations. It explicitly prohibits anti-patterns such as validation logic in handlers, synchronous database checks that block the pipeline, mixing business rules with input validation, throwing custom exceptions from validators, and generic, unhelpful error messages. Instead, it requires stateless validators injected via DI, async operations with proper cancellation token support, a clean separation between validation (input correctness) and business rules (domain logic), and specific, actionable error messages that guide users toward correct input. All of this is integrated automatically through the MediatR ValidationBehavior pipeline, which collects all validation failures before executing handlers.
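FluentValidation is a C# library, so the following is only a language-neutral sketch of its declarative, chained-rule style, written in TypeScript. The RuleBuilder class is hypothetical; notEmpty and maximumLength mirror the spirit of FluentValidation's NotEmpty() and MaximumLength() but are not its API:

```typescript
// Hypothetical rule builder sketching FluentValidation's chained, declarative style.
type Rule = { check: (value: string) => boolean; message: string };

class RuleBuilder {
  private rules: Rule[] = [];

  notEmpty(): this {
    this.rules.push({ check: v => v.length > 0, message: "must not be empty" });
    return this;
  }

  maximumLength(max: number): this {
    this.rules.push({
      check: v => v.length <= max,
      message: `must be at most ${max} characters`,
    });
    return this;
  }

  // Collect every failure rather than stopping at the first, matching the
  // "collect all validation failures" behavior described above.
  validate(value: string): string[] {
    return this.rules.filter(r => !r.check(value)).map(r => r.message);
  }
}

const nameRules = new RuleBuilder().notEmpty().maximumLength(100);
const errors = nameRules.validate("");
// errors: ["must not be empty"]
```

The key idea carried over from the real library is that rules are declared once, chained fluently, and evaluated together so the caller receives every failure in one pass.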

mediatr-implementation.instructions.md

The MediatR CQRS implementation standards instruction file establishes comprehensive patterns for implementing the Command Query Responsibility Segregation (CQRS) pattern in the Zeus Academia backend using MediatR 12.0+. It enforces strict separation between commands (write operations) and queries (read operations) through immutable record types, with one handler per request following single responsibility principles.

The file provides detailed code examples for command patterns (create, update, delete), query patterns (single entity, paginated lists with filtering), void commands returning Unit, and pipeline behaviors for cross-cutting concerns like FluentValidation integration and structured logging with performance metrics.

It includes dependency injection setup that automatically registers all handlers and validators from the application assembly, establishes clear naming conventions (e.g., Create<Entity>Command, Get<Entity>Query, <Command>Handler), and demonstrates integration with repositories, unit of work transactions, and domain entities.

The standards explicitly prohibit anti-patterns like mutable commands/queries, business logic in pipeline behaviors, queries that modify state, returning domain entities directly to APIs, and validation logic embedded in handlers. Instead, they mandate FluentValidation validators in the pipeline, repository abstractions for data access, DTO projections for API responses, and proper cancellation token propagation throughout async operations to ensure scalable, maintainable, and testable CQRS implementations.
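MediatR's core contract is small enough to sketch outside C#. The TypeScript below mirrors the one-handler-per-request shape described above; the RequestHandler interface is an illustrative stand-in for MediatR's IRequestHandler, and the student command is a hypothetical example, not the project's actual domain model:

```typescript
// Immutable command: readonly fields play the role of a C# record type.
interface CreateStudentCommand {
  readonly name: string;
}

// Illustrative stand-in for MediatR's IRequestHandler<TRequest, TResponse>:
// exactly one handler per request, with a single responsibility.
interface RequestHandler<TRequest, TResponse> {
  handle(request: TRequest): TResponse;
}

class CreateStudentHandler implements RequestHandler<CreateStudentCommand, string> {
  handle(request: CreateStudentCommand): string {
    // A real handler would call a repository inside a unit-of-work transaction
    // and return a DTO projection; deriving an id keeps the sketch runnable.
    return `student:${request.name.toLowerCase()}`;
  }
}

const handler = new CreateStudentHandler();
const studentId = handler.handle({ name: "Ada" });
// studentId: "student:ada"
```

The separation matters because a write operation (command) and a read operation (query) never share a handler, which is exactly what the instruction file enforces.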

pinia-implementation.instructions.md

The Pinia state management standards instruction file establishes comprehensive guidelines for implementing global state management in the Zeus Academia Vue 3 frontend using Pinia 2.1+ with TypeScript, mandating setup stores (composition API style) exclusively while avoiding the options API altogether. It provides extensive code examples for a standard CRUD store structure: reactive state using ref(), computed getters for derived values, async actions with loading/error states, and the $reset() function for cleanup. It also covers store composition for combining related stores, manual state persistence for sensitive data like auth tokens, and optimistic updates with rollback for improved perceived performance.

The file documents integration patterns including component usage with useXxxStore(), router guard integration for auth checks, and test patterns using setActivePinia() with Vitest for isolated store testing with mocked API calls. It establishes strict naming conventions (use<Feature>Store.ts), file organization under src/frontend/stores/, and type safety requirements ensuring all state, getters, and actions have explicit TypeScript types. Explicitly prohibited anti-patterns include options API stores, direct state mutations from components, storing derived values instead of using computed getters, mixing unrelated concerns in one store, async logic in getters, persisting loading states, and creating store instances outside component setup. Instead, the file requires focused, domain-specific stores, action-based state modifications, selective data persistence, and proper reactive composition to ensure maintainable, performant, and type-safe global state management built on Vue 3's reactivity system and Pinia's composition-based architecture.
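Of the patterns listed, optimistic update with rollback is worth a concrete illustration. The sketch below strips away Pinia so it runs standalone; in a real setup store the courses array would live in a ref(), the function would be an action, and the commit callback stands in for an API call. The Course shape is a hypothetical example:

```typescript
interface Course { id: number; title: string }

// Optimistic update with rollback: snapshot the current value, apply the
// change immediately so the UI updates at once, and restore the snapshot
// if the commit (normally an API call) fails.
function renameCourseOptimistically(
  courses: Course[],
  id: number,
  title: string,
  commit: (updated: Course) => boolean,
): void {
  const index = courses.findIndex(c => c.id === id);
  if (index === -1) return;
  const snapshot = courses[index];
  courses[index] = { ...snapshot, title }; // optimistic: applied before confirmation
  if (!commit(courses[index])) {
    courses[index] = snapshot; // rollback on failure
  }
}

const courses: Course[] = [{ id: 1, title: "Algebra" }];
renameCourseOptimistically(courses, 1, "Linear Algebra", () => true);
// courses[0].title is now "Linear Algebra"
```

The snapshot-then-rollback discipline is what keeps the store consistent when the server rejects an optimistic change.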

typescript-frontend-implementation.instructions.md

The TypeScript frontend implementation standards instruction file establishes comprehensive type-safety guidelines for the Zeus Academia Vue 3 SPA frontend, mandating TypeScript 5.3+ with strict mode enabled and zero tolerance for implicit any types.

It provides detailed patterns for domain model interfaces, API request/response types, type guards for runtime validation, discriminated unions for type-safe error handling, and utility types for common transformations like DeepPartial and RequireAtLeastOne. The file includes extensive code examples for Vue 3 integration, specifically typed component props using defineProps<T>(), composable return types, and Pinia store interfaces, while also covering TSConfig settings that enforce strict linting rules including noUnusedLocals, noUnusedParameters, and noImplicitReturns.

It explicitly calls out anti-patterns like using any, unchecked type assertions with as, excessive optional chaining, numeric enums, and @ts-ignore comments. Instead, it promotes type guards, readonly modifiers, string enums or union types, and proper null handling through type narrowing, ensuring the frontend codebase maintains complete type safety and leverages TypeScript's full compile-time guarantees.
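Two of those patterns, discriminated unions for error handling and type guards for runtime validation, combine naturally. A minimal sketch, assuming a hypothetical Student shape rather than the project's actual domain model:

```typescript
interface Student { id: string; name: string }

// Discriminated union: the "status" field lets the compiler narrow the type.
type ApiResult<T> =
  | { status: "ok"; data: T }
  | { status: "error"; message: string };

// Type guard: narrows unknown JSON to Student at runtime.
function isStudent(value: unknown): value is Student {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Student).id === "string" &&
    typeof (value as Student).name === "string"
  );
}

function parseStudent(json: string): ApiResult<Student> {
  const parsed: unknown = JSON.parse(json);
  return isStudent(parsed)
    ? { status: "ok", data: parsed }
    : { status: "error", message: "payload is not a Student" };
}

const good = parseStudent('{"id":"s1","name":"Ada"}');
const bad = parseStudent('{"id":42}');
// good.status === "ok", bad.status === "error"
```

Callers must check status before touching data, which is exactly the compile-time guarantee the instruction file demands in place of unchecked as-casts.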

vitest-implementation.instructions.md

The Vitest testing standards instruction file establishes comprehensive testing guidelines for the Zeus Academia Vue 3 frontend, mandating Vitest 1.2+ with 80%+ code coverage through clear Arrange-Act-Assert patterns, descriptive test naming that explains behavior rather than implementation, and fast execution times measured in milliseconds for unit tests.

It provides extensive code examples for unit testing composables with vi.mock and store mocking, component testing using @testing-library/vue with proper user-facing queries and event simulation, Pinia store testing with setActivePinia and isolated state, API mocking with axios interception, async component testing with waitFor patterns, and comprehensive setup configurations including vitest.config.ts with jsdom environment and @testing-library/jest-dom matchers.

The file documents all standard assertion patterns, from basic equality checks through Testing Library's accessibility-focused matchers like toBeInTheDocument() and toHaveTextContent(), while establishing strict validation requirements including mock cleanup with beforeEach(() => vi.clearAllMocks()), async/await over setTimeout delays, and proper test isolation. It explicitly prohibits anti-patterns including testing implementation details instead of user behavior, shallow rendering without justification, hardcoded test data duplication, generic uninformative test names, and multiple unrelated assertions per test. Instead, it requires focused behavioral tests, full component renders with Testing Library, test data factories for reusable fixtures, descriptive test names that serve as documentation, and single-concern tests that validate one logical behavior, ensuring maintainable, reliable test suites that validate frontend functionality through user-centric testing patterns aligned with the Testing Library philosophy.
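The test data factory pattern mentioned above is simple enough to sketch without Vitest itself: a factory returns a complete default object and accepts per-test overrides, so each spec states only what it cares about. The Course shape here is a hypothetical fixture, not the project's actual model:

```typescript
interface Course {
  id: number;
  title: string;
  credits: number;
  published: boolean;
}

// Test data factory: complete defaults plus per-test overrides, replacing
// hardcoded fixture duplication across spec files.
function makeCourse(overrides: Partial<Course> = {}): Course {
  return {
    id: 1,
    title: "Intro to Testing",
    credits: 3,
    published: true,
    ...overrides, // each test states only the fields it cares about
  };
}

const draft = makeCourse({ published: false });
// draft keeps every default except published
```

In a Vitest suite this factory would live in a shared test-utils module and be imported by each spec file.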

vue3-implementation.instructions.md

The Vue 3 implementation standards instruction file establishes comprehensive guidelines for building the Zeus Academia frontend SPA using Vue 3.4+ with TypeScript, mandating exclusive use of the Composition API with <script setup> syntax while completely avoiding the Options API. It provides extensive code examples for Single File Component (SFC) structure with typed props using defineProps<T>(), emits declaration with defineEmits<{}>(), composable patterns prefixed with use that encapsulate reusable logic with proper TypeScript return types, and template refs with explicit type annotations for DOM element and component instance access.

The file documents integration patterns with Pinia stores via useXxxStore() composables, Vue Router navigation hooks, and performance optimization techniques including lazy-loaded routes with () => import(), v-once for static content, v-memo for expensive list rendering, and computed properties over methods for derived reactive state. It establishes strict lifecycle hook usage patterns with cleanup requirements in onUnmounted for subscriptions and event listeners, plus validation checklists ensuring all components use scoped styles and handle async operations with loading/error states. Explicitly prohibited anti-patterns include untyped props, prop mutations, global state in composables, missing :key attributes in v-for loops, and inline event handler creation that causes unnecessary re-renders. Throughout, strict TypeScript typing ensures type-safe, performant, and maintainable Vue 3 components that use the reactivity primitives (ref, reactive, computed) and lifecycle hooks correctly for optimal rendering performance and developer experience.

xunit-implementation.instructions.md

The xUnit Testing standards instruction file establishes comprehensive testing guidelines for the Zeus Academia backend, mandating xUnit 2.6+ with a target of 80%+ code coverage through clear Arrange-Act-Assert test structure, descriptive naming conventions that explain the scenario and expected result, and one logical assertion per test for focused validation. It provides extensive code examples for unit tests using NSubstitute for mocking dependencies, theory tests with InlineData and MemberData for parameterized testing, test fixtures with IClassFixture for shared setup across test classes, integration tests using WebApplicationFactory to test actual HTTP endpoints, and repository tests with in-memory Entity Framework databases for data access validation.

The file demonstrates FluentAssertions usage for readable, expressive assertions covering objects, collections, strings, exceptions, and async operations, while establishing strict naming patterns like MethodName_Scenario_ExpectedResult (e.g., Handle_WithValidCommand_CreatesStudent). It explicitly prohibits anti-patterns including multiple unrelated assertions in one test, implementation-detail testing instead of behavior validation, test interdependencies, hardcoded test data duplication, async void test methods, and production database usage in tests. Instead, it requires isolated, independent tests with test data builders, async Task signatures for async operations, specific FluentAssertions over generic Assert.True, and in-memory databases or test containers. The goal is fast (<100ms unit, <1s integration), reliable, and maintainable test suites that validate behavior rather than implementation, with clear separation between unit tests (Zeus.Academia.UnitTests), integration tests (Zeus.Academia.IntegrationTests), and API tests (Zeus.Academia.Api.Tests).
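xUnit is C#-only, but the Arrange-Act-Assert structure and the MethodName_Scenario_ExpectedResult naming convention translate directly to any language. A minimal TypeScript sketch, where add is a trivial stand-in for the system under test and assertEquals is a local helper rather than a library call:

```typescript
// Local assertion helper, not a library API.
function assertEquals<T>(actual: T, expected: T): void {
  if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
}

// Stand-in for the system under test.
function add(a: number, b: number): number {
  return a + b;
}

// Naming: MethodName_Scenario_ExpectedResult, as the instruction file mandates.
function Add_WithTwoPositiveNumbers_ReturnsSum(): void {
  // Arrange
  const a = 2, b = 3;
  // Act
  const result = add(a, b);
  // Assert: one logical assertion per test
  assertEquals(result, 5);
}

Add_WithTwoPositiveNumbers_ReturnsSum(); // throws if the behavior regresses
```

The test name doubles as documentation: the scenario and expected result are readable without opening the test body.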

Validating the Governance Context for Conflicts

Before moving on, it's a good idea to review the .github governance (prompts, instructions, etc.) to ensure there aren't conflicting governance rules that could confuse GitHub Copilot. Conflicts can arise from overlapping instructions, contradictory guidelines, or inconsistent priorities across the instruction files. In Part 2, I introduced the check-context.prompt.md file, which examines the current context and reports any anomalies that should be addressed.

The prompt checks for conflicting instructions, factual inconsistencies, logical contradictions, scope/priority conflicts, technical incompatibilities, communication gaps, and duplication/redundancy.

This is a prompt that should be submitted any time the .github governance changes. Copilot didn't find any issues to address, so we are good to proceed.

What's Next?

Now that we've set the standards for the technical implementation, we're almost ready to begin coding. But first we need an implementation plan, which defines the tasks, their dependencies, and the order in which they should be executed to build the solution. It serves as a roadmap for the development process, ensuring that all necessary steps are taken in the correct sequence to achieve the desired outcome.

In Part 6, I'll introduce the vertical slice implementation pattern which breaks down the project into small, end-to-end slices that can be developed and tested independently. I'll also introduce an implementation plan prompt and show how to use AI to generate a detailed implementation plan that breaks down the project into manageable tasks while respecting the architectural constraints we've established. We'll finish this phase by creating implementation prompts that guide AI agents through the actual coding work, ensuring that all generated code adheres to our technology-specific instruction files and architectural vision.

If you'd like to explore the files or follow along with your own implementation, everything is available in the Academia GitHub repository. Fork it, experiment, and adapt it to your own workflow.


Feedback Loop

Feedback is always welcome. Send your thoughts to john.miller@codemag.com.

Disclaimer

AI contributed to the writing of this post, but humans reviewed it, refined it, enhanced it, and gave it soul.

Prompts:

  • Rewrite this blog post focusing on technology instruction files
  • After the section “## Why Architecture Instruction Files Matter” add a section describing running the following prompt to update the project overview:
    Submit this prompt #file:create-project-overview.prompt.md with these argument values:
    project_name: zeus.academia
    project_type: web app
    primary_language: C#, TypeScript
    key_technologies: Vue 3, Pinia, REST, GraphQL, Vite, Azure Static Web Apps, ASP.NET Core, MediatR, FluentValidation
    project_purpose: Academic Management System
  • After the “## Updating the Project Overview” section add a section describing the submission of this prompt: create a prompt file that creates technology specific instruction files for the technologies specified in the #file:project-overview.instructions.md
  • Drop the CQRS example and replace it with as section describing this prompt: Submit the #file:create-technology-instructions.prompt.md for each of the technologies specified in the #file:project-overview.instructions.md file
  • Add a section describing the submission of this prompt: Submit the #file:create-technology-instructions.prompt.md for each of the technologies specified in the #file:project-overview.instructions.md file
  • Write a paragraph describing these instructions #file:aspnetcore-implementation.instructions.md
  • Write a paragraph describing these instructions #file:mediatr-implementation.instructions.md
  • Write a paragraph describing these instructions #file:xunit-implementation.instructions.md
  • Write a paragraph describing these instructions #file:vue3-implementation.instructions.md
  • Write a paragraph describing these instructions #file:csharp-implementation.instructions.md
  • Write a paragraph describing these instructions #file:typescript-frontend-implementation.instructions.md
  • Write a paragraph describing these instructions #file:fluentvalidation-implementation.instructions.md
  • Write a paragraph describing these instructions #file:vitest-implementation.instructions.md
  • Write a paragraph describing these instructions #file:pinia-implementation.instructions.md