Reqnroll v3 migration: Dealing with your custom [Given], [When], [Then] attributes

In a previous post, I showed how to make your step-definition code more maintainable in SpecFlow. If you're migrating from SpecFlow or Reqnroll v2 to Reqnroll v3, this post is for you!

TL;DR: Updated attribute for Reqnroll v3

Assuming you're on .NET 8 or later, this class works as-is. On earlier versions, just replace the collection expression with your preferred initializer.

public class GivenWhenAttribute : StepDefinitionBaseAttribute
{
    static readonly StepDefinitionType[] types =
    [
        StepDefinitionType.Given,
        StepDefinitionType.When
    ];

    public GivenWhenAttribute() : this(null!) { }
    public GivenWhenAttribute(string expression) : base(expression, types) { }
}

Challenge: Culture no longer exists

The old SpecFlow-era overload set a Culture member via a constructor. In Reqnroll v3, that member is gone, so we remove this overload:

public GivenWhenAttribute(string regex, string culture)
    : this(regex) { Culture = culture; }

Challenge: step definition not discovered

After removing the Culture overload, everything compiled fine, but binding discovery failed with this exception:

Message: 
Reqnroll.MissingStepDefinitionException : No matching step definition found for one or more steps.
Stack Trace: 
ErrorProvider.ThrowPendingError(ScenarioExecutionStatus testStatus, String message)
TestExecutionEngine.OnAfterLastStepAsync()
TestRunner.CollectScenarioErrorsAsync()
ReproductionFeature.ScenarioCleanupAsync()
ReproductionFeature.CantFindStep() line 8
...

Reqnroll discovers step definitions via reflection: it looks for methods carrying an attribute derived from StepDefinitionBaseAttribute.

In v3, the constructor parameter names on your attribute must match Reqnroll's expected names. If they don't, Reqnroll does not recognize the attribute and discovery fails. Simply renaming the parameter from regex to expression solves the problem.

Load Testing Agents and MCP Servers over SSE with K6

If your AI agent or MCP server uses Server-Sent Events (SSE), you can't load test it like a normal REST API and call it done. SSE is all about long-lived connections, where data streams back to the client for a truly interactive, real-time feeling.

This guide is a practical how-to for testing the SSE transports used by agents and MCP services, focused on what breaks in production and how to catch it early. See also this post on load testing recipes for other approaches.

This post targets SSE-based/legacy MCP deployments. For new deployments, evaluate Streamable HTTP instead.

Why this matters

In addition to the usual reasons for load testing any server, an LLM's response takes far longer than a typical REST API call. Your agent streams characters back to the user, so they see the response arriving piece by piece over many seconds.

Many agent frameworks run with a small number of workers enabled by default. While a worker is dealing with a prompt, it cannot pick up more incoming requests. It's easy for an agent to saturate even under a small number of incoming requests.
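As a back-of-envelope sketch of why this happens (the numbers and function name here are illustrative, not taken from any particular agent framework): with a blocking worker pool, sustained throughput is capped by the pool size divided by the average response time.

```python
def max_sustained_rps(workers: int, avg_response_seconds: float) -> float:
    """Upper bound on requests/second a blocking worker pool can sustain.
    While a worker streams one response, it can serve nothing else."""
    return workers / avg_response_seconds

# e.g. 4 workers, each prompt streaming for ~8 seconds:
print(max_sustained_rps(workers=4, avg_response_seconds=8.0))  # 0.5 req/s
```

Anything above that rate queues up or times out, which is why a handful of concurrent prompts can saturate an agent.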

A single prompt often triggers multiple MCP tool calls to an MCP server. It's important to have confidence that both your agent and your own MCP server(s) can handle it.

Quick and dirty SSE overview

A typical request to an AI agent or MCP server running with SSE goes through these phases.

Note: MCP servers support different transports, such as stdio and Streamable HTTP. This post is about the SSE transport.

The client sends a POST /foo request with the usual TCP and HTTP semantics. Its body is specific to your agent / MCP server.

The server streams lines of text back to the client. In SSE, events are delimited by a blank line, and an event is composed of one or more field: value lines (e.g. data:). Here's an example from a session with an agent speaking the AG-UI protocol:

data:{"type":"RUN_STARTED", ...}
data:{"type":"TEXT_MESSAGE_CONTENT", "delta":"Hello"}
data:{"type":"TEXT_MESSAGE_CONTENT", "delta":" World"}
data:{"type":"TEXT_MESSAGE_CONTENT", "delta":"!"}
data:{"type":"TEXT_MESSAGE_END" }
data:{"type":"TOOL_CALL_START", ...}
data:{"type":"TOOL_CALL_RESULT", ...}
data:{"type":"TOOL_CALL_END", ...}
data:{"type":"RUN_FINISHED", ...}
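To make that framing concrete, here is a minimal SSE parser sketch in Python. It follows the event-stream format described above (blank line ends an event, a single space after the colon is stripped, repeated fields join with a newline), but it is illustrative, not a production parser:

```python
import json

def parse_sse(lines):
    """Minimal SSE parser: events are separated by a blank line; each event
    is one or more 'field: value' lines (data:, event:, id:, ...)."""
    event = {}
    for raw in lines:
        line = raw.rstrip("\n")
        if line == "":                  # blank line => the event is complete
            if event:
                yield event
                event = {}
            continue
        if line.startswith(":"):        # comment line, ignored
            continue
        field, _, value = line.partition(":")
        if value.startswith(" "):       # strip one leading space after ':'
            value = value[1:]
        event[field] = event[field] + "\n" + value if field in event else value
    if event:                           # flush a trailing, unterminated event
        yield event

stream = [
    'data:{"type":"TEXT_MESSAGE_CONTENT", "delta":"Hello"}',
    "",
    'data:{"type":"RUN_FINISHED"}',
    "",
]
events = list(parse_sse(stream))
payload = json.loads(events[0]["data"])
print(payload["delta"])  # Hello
```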

K6 SSE script

K6 is great for repeatable, CI-friendly tests! The k6/x/sse community extension helps us consume SSE streams. If you're running K6 version 1.2 or higher, you don't even need to create a custom build: K6's automatic extension resolution downloads and integrates the extension for you.

//Intellisense thinks it's an unresolvable module,
//so just ignore the linting error for the next line
//@ts-ignore
import sse from 'k6/x/sse'
import { check } from 'k6'

export default function () {
    const params = {
        method: "POST",
        body: JSON.stringify({
            jsonrpc: "2.0",
            id: 1,
            method: "tools/call",
            params: {
                name: "mcp-servers-tool-name",
                arguments: {
                    arg1: "Hello"
                }
            }
        }),
        headers: {
            "Accept": "*/*",
            "Content-Type": "application/json",
        },
        timeout: "10s"
    }

    //This sends the HTTP request. The handler function is responsible
    //for dealing with the received SSE events streamed back over the
    //connection.
    const res = sse.open("https://my-server/mcp", params, myHandler)

    //The return value of open() contains the HTTP status code in
    //res.status. Although it looks similar to a k6 http response,
    //it's a different object and lacks members like body and json()
    check(res, { "no http errors": (res) => res.status === 200 })

    function myHandler(client: any) {
        client.on("error", () => {
            //This example uses the normal k6 check() method
            //to record errors on the SSE level
            check(true, { "No SSE errors allowed": () => false })
        })
        client.on("event", (event: any) => {
            try {
                const mcpResponse = JSON.parse(event.data)
                check(mcpResponse, {
                    "No errors from MCP server allowed": (r) => !r.result?.isError
                })
                //Add any more checks you need to verify your MCP server
                //response contains sensible data
            } catch {
                check(false, { "MCP response must be valid JSON": () => false })
            }
        })
    }
}

Relevant metrics for your test

For an MCP server tool, compare the HTTP requests per second and SSE events per second rates. If they continuously diverge, the server is saturated and has exceeded its peak capacity. It can be interesting to see how long the server survives such a peak and whether it recovers after load decreases. K6 scenarios are your friend here.

For an AI agent, a custom metric for the time-to-first-token is a great indicator of how end users will experience the conversation flow.
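Time-to-first-token itself is simple to compute. A sketch (the helper name is hypothetical, and independent of K6):

```python
def time_to_first_token(request_started: float, event_timestamps: list) -> float:
    """Delay between sending the request and the first streamed event
    arriving -- the moment the user first sees something happen."""
    if not event_timestamps:
        raise ValueError("no events received")
    return event_timestamps[0] - request_started

# Request sent at t=0.0s, first delta arrived at t=1.2s:
print(time_to_first_token(0.0, [1.2, 1.4, 1.9]))  # 1.2
```

In the K6 script above, you would record this value into a custom Trend metric from the first "event" callback of each request.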


Recipes for load and stability testing

The objective of load and stability testing is to determine if a System Under Test (SUT) is able to meet its performance and/or stability requirements. A secondary objective is to determine how far the system can be pushed before it breaks.

Key challenges include:

  1. How can we make sure the tests and the SUT use suitable data? You need some way of ensuring the SUT and test scripts have a common understanding of what data exists at the start of the test.
  2. How can we deploy and start the SUT without its neighboring components being available or suitable for load testing?
  3. What about authentication blocking us from running automated tests?
  4. How can we test the impact of faults, caused by external components, when we don’t control those external components?

Here are some recipes ranging from complex to simple. No single approach fits every situation. Each has a best fit at some stage in the Software Development Life Cycle (SDLC).

End-to-end load tests

The entire product with all its software components, internal data and ‘customer’ data is deployed into some production-like environment.

An example:

  1. Deploy all microservices onto a Kubernetes cluster.
  2. Use scripting to seed databases or use real-world (anonymized) databases and extract whatever content the tests need from those databases.
  3. Use tools and services such as K6, JMeter, Locust, Gatling, Taurus, NeoLoad, or BlazeMeter to generate load.
  4. Use your DevOps tooling to monitor and analyze the SUT’s behavior.

Pros

The impact of resource limits, network setup, and caches is validated with this approach.

Unpredictable interactions between components become visible.

The impact of real-world data sets becomes visible.

Cons

It’s costly and time consuming to arrange a suitable environment.

Deployment of components can be time consuming. Especially as the environment is likely to start in an undefined state due to previous test runs and deployments of feature branches.

Authentication and intrusion detection can block running these types of tests. Obtaining valid credentials might be impossible when Multi Factor Authentication is active.

Laws and regulations may block you from using real-world data.

Bringing databases into a suitable starting state can be difficult and time consuming. Especially when the data is distributed over multiple services with their own storage technology.

Component load/stability tests

A single component with its own internal data and fake, but large enough, ‘customer’ data is tested in isolation. It starts, gets tested and deleted.

A simple example is:

  1. Add a single microservice, the SUT, into a container image.
  2. Use frameworks such as Faker and Bogus to generate data for the SUT, mocks and repository databases.
  3. Add mock servers and their data into a container image. Something like WireMock or some simple custom-built HTTP server.
  4. Use a real database server in a container to host your repository data.
  5. If you need to simulate intermittent faults on repository access, you can mock the repository classes to throw an exception x% of the time. In the sunny day scenario the mock simply forwards the request to the real repo class. Use frameworks such as Moq and Mockito to generate the mocks around the real repository classes.
  6. Add a load generator and the test scripts into the container.
  7. Configure the SUT to talk to the mock servers and repository / database server in the container.
  8. Run the container(s) as part of your CI/CD pipelines. Maybe using docker-compose on a single CI/CD agent or maybe in some temporary Kubernetes namespace.
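Step 5 above (failing x% of repository calls) can be sketched language-agnostically. Here is a Python version using a simple wrapper instead of Moq/Mockito; the class and method names are hypothetical:

```python
import random

class FlakyRepository:
    """Wraps a real repository and fails a configurable fraction of calls,
    simulating intermittent faults during a load test."""

    def __init__(self, real_repo, failure_rate: float, rng=random.random):
        self.real_repo = real_repo
        self.failure_rate = failure_rate
        self.rng = rng                      # injectable for deterministic tests

    def get(self, key):
        if self.rng() < self.failure_rate:
            raise ConnectionError("simulated intermittent repository fault")
        # Sunny day scenario: simply forward to the real repository
        return self.real_repo.get(key)

class InMemoryRepo:
    def get(self, key):
        return {"id": key}

# Fail ~10% of calls during the load test:
repo = FlakyRepository(InMemoryRepo(), failure_rate=0.10)
```

During sunny-day runs you set failure_rate to 0; for fault-injection runs you raise it and verify the SUT degrades gracefully.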

If running everything in a few containers is not suitable, you can distribute the SUT, mocks, databases and load generator over multiple container hosts and/or external service providers. For example:

  1. Use providers of mocking tools to spin up mock servers. Some providers are Mockoon, BlazeMeter Service Virtualization, and many more.
  2. Same idea for database servers.
  3. Add the SUT into a container image and configure it to talk to the mock servers and databases.
  4. Start the SUT’s container image on some online platform (Kubernetes cluster, Azure Container Instances etc. etc.)
  5. Run the load test from some load test service provider like NeoLoad or BlazeMeter.

Pros

Having the SUT tested in isolation removes many blockers that end-to-end tests often face. These tests are easier to build and maintain, and less work to run and analyze.

Tests act as an automated QA gate for a component. Failures are visible earlier in the SDLC.

Using mocks and custom repository classes makes it easy to control injection of faults. It’s trivial to have a mock server or repository class simulate a failure in x% of the cases.

Cons

The SUT needs to be engineered in such a way that it's possible to swap in your mocks, database servers, or repository classes. Legacy code might block this and need refactoring.

Usually you won’t be testing with real resource limits that end-to-end tests would run with.


Unit/integration tests for performance-critical code in a tight loop

Use the unit test framework of the programming environment to run some critical code for a specific length of time or until a max number of iterations is reached.

An example would be to implement an NUnit/JUnit test to run some algorithm many times against various random data sets. Use frameworks such as Moq and Mockito to avoid depending on external systems and control what data the SUT gets back from the mocked dependencies. Use frameworks such as Faker, Bogus or property-based test generators to generate the randomized inputs to the algorithm.
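The same idea in Python's unittest, as a sketch (the algorithm, time budget, and iteration cap are placeholders, not from the post):

```python
import random
import time
import unittest

def algorithm_under_test(data):
    # Stand-in for your performance-critical code.
    return sorted(data)

class TightLoopPerfTest(unittest.TestCase):
    MAX_ITERATIONS = 10_000
    TIME_BUDGET_SECONDS = 2.0

    def test_algorithm_on_random_inputs(self):
        rng = random.Random(42)          # seeded for reproducible CI runs
        deadline = time.monotonic() + self.TIME_BUDGET_SECONDS
        iterations = 0
        # Run until the iteration cap or the time budget, whichever comes first
        while iterations < self.MAX_ITERATIONS and time.monotonic() < deadline:
            data = [rng.randint(0, 1_000_000) for _ in range(100)]
            result = algorithm_under_test(data)
            self.assertEqual(len(result), len(data))
            iterations += 1
        self.assertGreater(iterations, 0)
```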

Pros

Runs fast.

Does not depend on availability and state of external systems as they are mocked-away.

Integrates well with IDEs.

Cons

Impact of interacting with external systems is not tested here. This will miss defects on topics such as connection pooling, buffering, latency and caching.

A tight loop is usually single threaded, so it will miss defects caused by concurrency problems.

Failures during CI runs are hard to analyze, as unit test frameworks are generally not geared to post-mortem analysis of time-based metrics. Integration with a profiler for your programming language is likely to be needed.

Custom generators for property-based tests in C#/.NET (CsCheck)

Here’s how you can create a custom generator for your property-based test in C# using CsCheck.

Example domain object

Assume we need to generate instances of this class:

class MyClass
{
    public int MyInt { get; set; }
    public string MyString { get; set; }
}

Building the custom generator

For the sake of this example, assume the generated instances must conform to these rules:

  1. The integer must be greater than 0.
  2. The string must be one of: "x", "✓", or a string of 10 to 20 random characters.

Here’s what the generator looks like:

class MyGenerator
{
    public static Gen<MyClass> MyClass =>
        from i in Gen.Int where i > 0 
        from s in Gen.OneOf(Gen.String[10, 20], 
            Gen.Const("x"), 
            Gen.Const("✓"))
        select new MyClass
        {
            MyInt = i,
            MyString = s,
        };
}

Validating generated inputs

Here’s how to use it:

    [Fact]
    public void EachInputMustBeCorrect()
    {
        MyGenerator.MyClass.List.Sample(input =>
        {
            if(input.Count == 0) { return; }
            input.Should().AllSatisfy(MustBeCorrect);
        });
    }

    void MustBeCorrect(MyClass o)
    {
        o.MyInt.Should().BeGreaterThan(0);
        o.MyString.Should().Match(x => x == "x"  || 
            x == "✓" || 
            (x.Length >= 10 && x.Length <= 20)
        );
    }

Kubernetes + containerd: Run an Image Without Pushing to a Registry

When to use this approach

Need to start a container right now, but the image isn't in a registry yet? This approach is for you. We're going to export your image as a tar file and import it into Kubernetes' containerd.

⚠️ Limitations

Consider this a temporary measure while you arrange a registry because:

  • Pods will fail if the image is not loaded on the node!
  • Any changes to the image require you to update it on all nodes.
  • New nodes in the cluster also need the image to be imported.

Export image

Make sure you have the image available as a tar file. Here’s how to save one from a machine with Docker:

docker save repository/image --output ./image.tar

Transfer and import on each node

Copy the tar file to each node with a command like:

scp image.tar node:/tmp/

Run this command on each node to load the image into containerd:

sudo ctr -n k8s.io images import image.tar

Verify import succeeded with:

sudo ctr -n k8s.io images ls

Deploy with correct pull policy

To use this image in a Kubernetes deployment, make sure it’s set to imagePullPolicy: Never

Here’s an example deployment:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-app
  labels:
    app: my-app

spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: xxxxx/my-app
          imagePullPolicy: Never

How to install Kubernetes onto physical machines for a home lab

On each machine: Install Ubuntu Server LTS 24.04

Ensure you can SSH into it, and enable passwordless sudo:

echo "$USER ALL=(ALL:ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/$USER

This helps in running commands on each machine in parallel.
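One way to run commands on each machine in parallel is a small script over SSH. A sketch in Python (the host names are examples, matching the nodes used later in this post):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Example host names -- replace with your own machines.
NODES = ["optiplex1", "optiplex2", "optiplex3"]

def run_on_node(node: str, command: str):
    """Run one command on one node over SSH. Passwordless sudo on the node
    means 'sudo ...' commands won't hang waiting for a password prompt."""
    return subprocess.run(["ssh", node, command], capture_output=True, text=True)

def run_on_all(command: str, runner=run_on_node):
    """Fan the same command out to every node in parallel."""
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        return list(pool.map(lambda node: runner(node, command), NODES))

# Example: update packages everywhere at once (not executed here):
# results = run_on_all("sudo apt-get update && sudo apt-get upgrade -y")
```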

On each machine: Install kubeadm

Based on Bootstrapping clusters with kubeadm

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo swapoff -a

On each machine: Install containerd

Kubernetes no longer supports dockerd as the container runtime, so we'll use containerd directly, based on Anthony Nocentino's blog: Installing and Configuring containerd as a Kubernetes Container Runtime

Configure the required kernel modules:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

Configure the required sysctl settings so they persist across system reboots, then apply them immediately:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

Install containerd packages

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update && sudo apt-get install -y containerd.io

Create a containerd configuration file

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

Set the cgroup driver to systemd. Kubernetes uses systemd while containerd defaults to cgroupfs; they must both use the same setting:

sudo sed -i 's/            SystemdCgroup = false/            SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd

Only on the first (master) machine

Initialize the K8s cluster. Save this output somewhere, you’ll need the kubeadm join ... part later.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.30.2
[preflight] Running pre-flight checks
...
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0629 19:20:06.570522   14350 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
...
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.178.171:6443 --token v1flk8.wy9xyikw6kosevps \
        --discovery-token-ca-cert-hash sha256:e79a8516a0990fa232b6dcde15ed951ffe46880854fe1169ceb3b909d82fff00

On each machine: Follow the recommendation of kubeadm to update the sandbox image.

Use a text editor to replace sandbox_image = "registry.k8s.io/pause:3.8" with sandbox_image = "registry.k8s.io/pause:3.9" in /etc/containerd/config.toml

Restart containerd:

sudo systemctl restart containerd.service

On the master node: Ensure kubectl knows what cluster you work with

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

On each other machine: Join it to the cluster:

sudo kubeadm join 192.168.178.171:6443 \
    --token v1flk8.wy9xyikw6kosevps \
    --discovery-token-ca-cert-hash sha256:e79a8516a0990fa232b6dcde15ed951ffe46880854fe1169ceb3b909d82fff00

On the master node: Configure the POD network

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl apply -f kube-flannel.yml

The installation is finished; check the status:

kubectl get nodes

should give output like:

NAME        STATUS   ROLES           AGE     VERSION
optiplex1   Ready    control-plane   2d23h   v1.30.2
optiplex2   Ready    <none>          2d23h   v1.30.2
optiplex3   Ready    <none>          2d23h   v1.30.2

Reuse a SpecFlow step definition for Given and When

Have you ever wanted to use a single step definition like I create file '(.*)' from Given and When contexts in a feature file like this?

Scenario: CreateFile
When I create file 'hello.txt'
Then ...

Scenario: DeleteFile
Given I create file 'hello.txt'
When I delete 'hello.txt'

Here's how to have one method implement both the Given and the When I create file '...' step.

💡 Prefer the newer Reqnroll v3 version?

This post explains the original SpecFlow approach. If you’re using Reqnroll v3 read the updated version instead.

First, create a class like this:

public class GivenWhenAttribute : StepDefinitionBaseAttribute
{
    static readonly StepDefinitionType[] types = new[] { StepDefinitionType.Given, StepDefinitionType.When };

    public GivenWhenAttribute() : this(null) { }
    public GivenWhenAttribute(string regex) : base(regex, types) { }
    public GivenWhenAttribute(string regex, string culture) : this(regex) { Culture = culture; }
}

Then use it in the [Binding] classes like this:

[GivenWhen("I create file '(.*)'")]
public void CreateFile(string name) { File.Create(name); }

Voilà! One step definition method now works for Given and When steps in the feature file.

Adding PACT Contract Testing to an existing golang code base

Here's how to add a contract test to a Go microservice (a provider, in pact terminology) using pact-go. This post uses v1 of pact-go, as the v2 version is still in beta.

Install PACT cli tools on your dev machine

A linux or dev-container based environment can just use the same approach as the CI pipeline documented later on in this post.

For a Windows dev machine, install the pact-cli tools like this:

  1. Use browser to download https://github.com/pact-foundation/pact-ruby-standalone/releases/download/v2.0.2/pact-2.0.2-windows-x86.zip
  2. Unzip it to c:\Repos
  3. Change PATH to include c:\Repos\pact\bin
  4. Restart any editors or terminals
  5. Run go install gopkg.in/pact-foundation/pact-go.v1

Create a unit test to validate the service meets the contract/pact

Add a unit test like the one below. Notice these settings and how environment variables affect them. This helps when tests need to run both on the local development machine and on CI/CD machines.

Setting: PublishVerificationResults
Effect: Controls whether pact publishes the verification result to the broker. On my local dev machine I don't publish; from the CI pipeline I do.

Setting: ProviderBranch
Effect: The name of the branch for this provider version.

Setting: ProviderVersion
Effect: The version of this provider. My CI pipeline ensures the PACT_PROVIDER_VERSION variable contains the unique build number. On my local machine it's just set to 0.0.0.

package app
import (
    "fmt"
    "bff/itemrepository"
    "os"
    "strconv"
    "strings"
    "testing"

    "github.com/pact-foundation/pact-go/dsl"
    "github.com/pact-foundation/pact-go/types"
    "github.com/stretchr/testify/assert"
)

func randomItem(env string, name string) itemrepository.Item {
    return itemrepository.Item{
        Environment: env,
        Name:        name,
    }
}
}

func getEnv(name string, defaultVal string) string {
    tmp := os.Getenv(name)
    if len(tmp) > 0 {
        return tmp
    } else {
        return defaultVal
    }
}

func getProviderPublishResults() bool {
    tmp, err := strconv.ParseBool(getEnv("PACT_PROVIDER_PUBLISH", "false"))
    if err != nil {
        panic(err)
    }
    return tmp
}

func TestProvider(t *testing.T) {
    //Arrange: Start the service in the background.
    port, repo, _ := startApp(false)
    pact := &dsl.Pact{ Consumer: "MyConsumer", Provider: "MyProvider", }

    //Act: Let pact spin up a mock client to verify our service.
    _, err := pact.VerifyProvider(t, types.VerifyRequest{
            ProviderBaseURL:            fmt.Sprintf("https://localhost:%d", port),
            BrokerURL:                  getEnv("PACT_BROKER_BASE_URL", ""),
            BrokerToken:                getEnv("PACT_BROKER_TOKEN", ""),
            PublishVerificationResults: getProviderPublishResults(),
            ProviderBranch:             getEnv("PACT_PROVIDER_BRANCH", ""),
            ProviderVersion:            getEnv("PACT_PROVIDER_VERSION", "0.0.0"),
            StateHandlers: types.StateHandlers{
                "I have a list of items": func() error {
                    repo.Set("env1", []itemrepository.Item{randomItem("env1", "tenant1")})
                    return nil
                },
            },
        })
    pact.Teardown()

    //Assert
    assert.NoError(t, err)
}

Ensure the CI pipeline runs the test and publishes verification results

For my CI builds, I run this to make sure the CI machine has the pact-cli tools installed:

cd /opt
curl -fsSL https://raw.githubusercontent.com/pact-foundation/pact-ruby-standalone/master/install.sh | bash
export PATH=$PATH:/opt/pact/bin
go install github.com/pact-foundation/pact-go@v1
...
pipeline already runs the unit tests
...

Check `can-i-deploy` in the CD pipeline

Change the CD pipeline to

  • Verify this version of the service has passed the contract test; otherwise do not deploy to production.
  • After a successful deployment, inform the broker which new version is running in production.

My CD pipeline runs on different machines, so it again has to ensure PACT is installed, just like the CI pipeline:

# pipeline variables
PACT_PACTICIPANT=MyProvider
PACT_ENVIRONMENT=production
PACT_BROKER=https://yourtenant.pactflow.io
PACT_BROKER_TOKEN=...a read/write token...preferably a system user to represent CI/CD actions...

# task to install PACT
cd /opt
curl -fsSL https://raw.githubusercontent.com/pact-foundation/pact-ruby-standalone/master/install.sh | bash
echo "##vso[task.prependpath]/opt/pact/bin"
...
...
# Task to check if the version of the build can be deployed to production
pact-broker can-i-deploy --pacticipant $PACT_PACTICIPANT --version $BUILD_BUILDNUMBER --to-environment $PACT_ENVIRONMENT --broker-base-url $PACT_BROKER --broker-token $PACT_BROKER_TOKEN
...
#Do whatever tasks you need to deploy the build to the environment
...
...
#Task to record the deployment of this version of the producer to the environment
pact-broker record-deployment --environment $PACT_ENVIRONMENT --version $BUILD_BUILDNUMBER --pacticipant $PACT_PACTICIPANT --broker-base-url $PACT_BROKER --broker-token $PACT_BROKER_TOKEN

Adding PACT Contract Testing to an existing TypeScript code base

I like Contract Testing! I added a contract test with PACT-js and Jest for my consumer like this:

Installing PACT

  1. Disable the ignore-scripts setting: npm config set ignore-scripts false
  2. Ensure a build chain is installed. Most Linux-based CI/CD agents have this out of the box. My local dev machine runs Windows; according to the installation guide for gyp, the process is:
    1. Install Python from the MS App store. This takes about 5 minutes.
    2. Ensure the machine can build native code. My machine had Visual Studio already, so I just added the ‘Desktop development with C++’ workload using the installer from ‘Tools -> Get Tools and Features’. This takes about 15–30 minutes.
    3. npm install -g node-gyp
  3. Install the PACT js npm package: npm i -S @pact-foundation/pact@latest
  4. Write a unit test using either V3 or V2 of the PACT specification. See below for some examples.
  5. Update your CI build pipeline to publish the PACT like this: npx pact-broker publish ./pacts --consumer-app-version=$BUILD_BUILDNUMBER --auto-detect-version-properties --broker-base-url=$PACT_BROKER_BASE_URL --broker-token=$PACT_BROKER_TOKEN

A V3 version of a PACT unit test in Jest

//BffClient is the class implementing the logic to interact with the micro-service.
//the objective of this test is to:
//1. Define the PACT with the microservice
//2. Verify the class communicates according to the pact

import { PactV3, MatchersV3 } from '@pact-foundation/pact';
import path from 'path';
import { BffClient } from './BffClient';

// Create a 'pact' between the two applications in the integration we are testing
const provider = new PactV3({
    dir: path.resolve(process.cwd(), 'pacts'),
    consumer: 'MyConsumer',
    provider: 'MyProvider',
});

describe('GET /', () => {
    it('returns OK and an array of items', () => {
        const exampleData: any = { name: "my-name", environment: "my-environment", };

        // Arrange: Setup our expected interactions. Pact mocks the microservice for us.
        provider
            .given('I have a list of items')
            .uponReceiving('a request for all items')
            .withRequest({method: 'GET', path: '/', })
            .willRespondWith({
                status: 200,
                headers: { 'Content-Type': 'application/json' },
                body: MatchersV3.eachLike(exampleData),
            });
        return provider.executeTest(async (mockserver) => {
            // Act: trigger our BffClient client code to do its behavior 
            // we configured it to use the mock instead of needing some external thing to run
            const sut = new BffClient(mockserver.url, "");
            const response = await sut.get()

            // Assert: check the result
            expect(response.status).toEqual(200)
            const data:any[] = await response.json()
            expect(data).toEqual([exampleData]);
        });
    });
});

A V2 version

import { Pact, Matchers } from '@pact-foundation/pact';
import path from 'path';
import { BffClient } from './BffClient';

// Create a 'pact' between the two applications in the integration we are testing
const provider = new Pact({
    dir: path.resolve(process.cwd(), 'pacts'),
    consumer: 'MyConsumer',
    provider: 'MyProvider',
});

describe('GET', () => {
    afterEach(() => provider.verify());
    afterAll(() => provider.finalize());

    it('returns OK and array of items', async () => {
        const exampleData: any = { name: "my-name", environment: "my-environment", };
        // Arrange: Setup our expected interactions. Pact mocks the microservice for us.
        await provider.setup()
        await provider.addInteraction({
            state: 'I have a list of items',
            uponReceiving: 'a request for all items',
            withRequest: { method: 'GET', path: '/',  },
            willRespondWith: {
                status: 200,
                headers: { 'Content-Type': 'application/json' },
                body: Matchers.eachLike(exampleData),
            },
        })

        // Act: trigger our BffClient client code to do its behavior 
        // we configured it to use the mock instead of needing some external thing to run
        const sut = new BffClient(provider.mockService.baseUrl, "");
        const response = await sut.get()
        
        // Assert: check the result
        expect(response.status).toEqual(200)
        const data: any[] = await response.json()
        expect(data).toEqual([exampleData]);
    });
});