VS Code 1.119 shipped on May 6, 2026, and the most interesting changes are not about the chat box itself. They improve the working loop around the AI agent: integrated browser access, OpenTelemetry signals, and more precise safety controls.
This is not a shiny-button release you forget about tomorrow. If your team is already experimenting with agentic coding, these small-looking changes decide whether the agent becomes a useful pair engineer or just a confident generator of chaotic diffs.
What changed
Microsoft highlights three practical areas in the 1.119 release for agent workflows.
First, browser tabs can be explicitly shared with the agent. The agent does not automatically get access to the integrated browser. You attach a page as context or put a browser tab into a sharing state. That boundary matters: the agent sees the specific page you allowed, not your whole browser life.
Second, Copilot Chat agent sessions can export OpenTelemetry. Traces can show LLM calls, tool calls, latency, token usage, and nested steps. For longer agent sessions this is almost a flight recorder: you can understand why the agent got stuck, where tokens went, and which tool failed.
Third, VS Code improved sandbox and approval controls. The new allowNetwork mode for chat.agent.sandbox.enabled keeps file system restrictions while removing network domain blocking. VS Code also interrupts sessions less often for temporary files in /tmp or %TEMP% when commands are already allowed for the session.
Why it matters
Agentic coding usually breaks not because “the model is bad”, but because the feedback loop is poorly configured.
A common scenario: the agent edits a React component, cannot see the page, guesses the result, asks for five approvals in a row, and then nobody can reproduce why it called those tools. The human spends more time supervising the agent than the automation saved.
VS Code 1.119 helps create a healthier loop:
- the agent gets access only to the page it needs;
- it can verify the result in a browser tab;
- the team can inspect a trace of the session;
- the sandbox reduces accidental damage;
- approvals remain where the risk is real.
How to configure it for a team
This is not a full release-note tour. It is a short operational playbook you can run through in 30–40 minutes on one frontend or full-stack project.
Step 1. Define which pages the agent may see
Do not start with “let’s give the agent a browser because it is faster”. Start with a list of allowed URLs.
For local web development, a reasonable starter list looks like this:
http://localhost:3000/*
http://localhost:5173/*
http://127.0.0.1:*/health
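The allowlist above can be checked mechanically before anyone shares a tab. The sketch below is a hypothetical team-side helper, not a VS Code API: it only illustrates how glob-style URL patterns like the ones in the list could be evaluated.

```python
from fnmatch import fnmatch

# Hypothetical helper -- VS Code does not expose this check itself.
# The patterns are glob-style, matching the starter allowlist above.
ALLOWED_URL_PATTERNS = [
    "http://localhost:3000/*",
    "http://localhost:5173/*",
    "http://127.0.0.1:*/health",
]

def is_url_allowed(url: str, patterns=ALLOWED_URL_PATTERNS) -> bool:
    """Return True if the URL matches any allowlisted glob pattern."""
    return any(fnmatch(url, pattern) for pattern in patterns)
```

A check like this can live in a pre-session script or a PR template, so "which pages may the agent see" is a decision made once, not per conversation.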
Admin panels, billing pages, production data, personal email, CRM screens, and real customer records should not be shared by default. Even if the agent “only reads”, it can use the observed context in later actions.
In VS Code, check the browser tools in the chat tools picker. For web verification, the agent usually needs navigation, screenshot, read-page, and click/type actions. A useful detail: pages opened by the agent run in private in-memory sessions by default, so they do not share cookies or storage with your normal browser tabs.
Step 2. Share a browser tab only for a specific task
The rule is simple: one task, one minimal browser context.
Bad prompt:
Check the whole site and fix the design.
Better:
I attached a browser tab with http://localhost:5173/pricing.
Check only the pricing cards layout at mobile width.
Do not navigate to billing/admin/login.
If you need another URL, explain why first.
Now the agent has a clear boundary: what to inspect, what not to touch, and when to ask. This reduces noise and makes the result reviewable.
Step 3. Enable OpenTelemetry for longer sessions
If your team already has an OTel backend, VS Code 1.119 gives you a practical way to see inside an agent session.
Minimal settings version:
{
  "github.copilot.chat.otel.enabled": true,
  "github.copilot.chat.otel.otlpEndpoint": "http://localhost:4318"
}
Or through environment variables:
export COPILOT_OTEL_ENABLED=true
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=copilot-chat-dev
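Before enabling export, it is worth sanity-checking that the endpoint variable is well-formed, because a silently misconfigured exporter produces no traces and no errors. This is a sketch of such a pre-flight check; the variable name follows the standard OTel SDK convention, and the default value here is an assumption.

```python
import os
from urllib.parse import urlparse

# Sketch of a pre-flight check that OTEL_EXPORTER_OTLP_ENDPOINT is
# well-formed before a session. The fallback default is an assumption.
def parse_otlp_endpoint(env=os.environ) -> tuple:
    """Return (host, port) from OTEL_EXPORTER_OTLP_ENDPOINT."""
    raw = env.get("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318")
    parsed = urlparse(raw)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        raise ValueError(f"Malformed OTLP endpoint: {raw!r}")
    # 4318 is the conventional OTLP/HTTP port; 4317 is OTLP/gRPC.
    port = parsed.port or (443 if parsed.scheme == "https" else 4318)
    return parsed.hostname, port
```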
In a trace, you are not looking for a pretty chart. You are looking for practical answers:
- which model was actually called;
- how many tokens the task consumed;
- which tools ran;
- where latency was highest;
- which tool call ended with an error;
- whether any unexpected external calls happened.
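Answering those questions usually means boiling an exported span list down to a few numbers. The sketch below assumes a simplified flat span shape; the token attribute names are borrowed from the OTel GenAI semantic conventions and may differ from what Copilot Chat actually emits, so inspect your own export first.

```python
from collections import Counter

# Sketch: summarize a flat list of exported spans. The span shape and
# the gen_ai.usage.* attribute names (from the OTel GenAI semantic
# conventions) are assumptions -- check what your backend receives.
def summarize_session(spans):
    attrs = lambda s: s.get("attributes", {})
    tokens = sum(attrs(s).get("gen_ai.usage.input_tokens", 0)
                 + attrs(s).get("gen_ai.usage.output_tokens", 0)
                 for s in spans)
    tools = Counter(s["name"] for s in spans if s.get("kind") == "tool")
    errors = [s["name"] for s in spans if s.get("status") == "error"]
    slowest = max(spans, key=lambda s: s.get("duration_ms", 0), default=None)
    return {
        "total_tokens": tokens,
        "tool_calls": dict(tools),
        "failed": errors,
        "slowest_span": slowest["name"] if slowest else None,
    }
```

Even this crude summary answers the practical questions: where tokens went, which tool failed, and which step dominated latency.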
Be careful with COPILOT_OTEL_CAPTURE_CONTENT: full prompt and response content can include secrets, code fragments, or customer data. Most teams should start without content capture and collect metadata only.
Step 4. Choose the sandbox and network control mode
The key is not to mix two different scenarios.
Scenario A: strict mode for sensitive projects
Enable the sandbox and network filtering, then allow domains explicitly:
{
  "chat.agent.sandbox.enabled": "on",
  "chat.agent.networkFilter": true,
  "chat.agent.allowedNetworkDomains": [
    "registry.npmjs.org",
    "api.github.com",
    "localhost"
  ]
}
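To reason about what such an allowlist actually permits, it helps to be explicit about matching semantics. The sketch below is an assumption, not VS Code's documented behavior: it treats an entry as matching the exact host or any of its subdomains, which is a common convention.

```python
# Sketch of how an allowlist like chat.agent.allowedNetworkDomains
# might be evaluated. VS Code's real matching rules are not specified
# here -- this version allows an exact host or any subdomain of it.
ALLOWED_DOMAINS = ["registry.npmjs.org", "api.github.com", "localhost"]

def host_allowed(host, allowed=ALLOWED_DOMAINS):
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in allowed)
```

Note the suffix check is anchored at a dot boundary, so a lookalike host such as registry.npmjs.org.attacker.com does not slip through.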
This mode fits corporate code, production API integrations, and tasks where the agent should not freely browse the internet.
Scenario B: fast local inner loop
If the agent needs to install packages, run a dev server, call local APIs, and not stop at every domain, use allowNetwork:
{
"chat.agent.sandbox.enabled": "allowNetwork"
}
In this mode, file system isolation remains, but domain allowlists and denylists no longer control network access. It is convenient for local development, but do not describe it as "sandbox plus strict network filtering"; it is a different tradeoff.
Step 5. Keep important approvals alive
VS Code 1.119 reduces approval fatigue for temporary files: if commands are allowed for the session, writes to /tmp or %TEMP% should no longer interrupt the agent loop constantly.
That does not mean you should disable everything. Keep manual approval for:
- commands that write outside the workspace;
- access to production domains;
- operations involving secrets;
- git push, release, deploy, or publish;
- large edits to configuration files;
- unknown scripts from dependencies.
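A reviewer-side heuristic can encode that list so the team agrees in advance on which commands keep a manual prompt. The patterns below are illustrative, not a complete policy, and do not map to any VS Code setting.

```python
import re

# Sketch: heuristic for which shell commands should keep a manual
# approval prompt. Illustrative patterns only -- not a VS Code setting
# and not a complete security policy.
RISKY_PATTERNS = [
    r"\bgit\s+push\b",
    r"\b(deploy|release|publish)\b",
    r"\brm\s+-rf\s+/",            # destructive deletes outside the workspace
    r"(SECRET|TOKEN|PASSWORD)=",  # inline secrets on the command line
]

def needs_manual_approval(command: str) -> bool:
    return any(re.search(p, command, re.IGNORECASE) for p in RISKY_PATTERNS)
```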
The goal is not to make the agent ask permission to breathe. The goal is to prevent it from silently taking an expensive or dangerous step.
A team workflow for every agent session
Here is a simple template you can add to a team README.
Before the session:
Task: what exactly is changing
Allowed browser tabs: list of URLs
Allowed network: domain list or allowNetwork mode
Forbidden areas: production, billing, secrets, admin
Success check: how we know the task is done
During the session:
- the agent works within one clear scope;
- a new browser tab is added only after an explanation;
- shell commands must not write outside the workspace without approval;
- if the agent asks for risky approval, the human reads the full command.
After the session:
- review the diff;
- run a test or build;
- inspect the trace if the task was long;
- write down one improvement for instructions or allowlists.
This sounds a little bureaucratic, but it takes less time than debugging why the agent deleted the wrong file.
What not to do
Do not share sensitive tabs “just for a minute”. If a page contains personal data, tokens, billing, or production controls, it is not agent context.
Do not assume allowNetwork and domain allowlists work together. In allowNetwork mode, network blocking is removed. If you need strict domain control, use a different mode.
Do not collect full prompt content in OpenTelemetry without a policy. Metadata is useful. Full text can become a risk.
Do not enable Bypass Approvals without a sandbox or container. Autonomy without isolation feels fast until it hurts.
Do not judge the agent only by the final answer. Look at tool calls, diff, tests, and trace. That is where the real quality of work shows up.
Conclusion
VS Code 1.119 does not make AI agents more magical. It makes them more governable. That is good news: browser tabs give the agent eyes, OpenTelemetry gives the team memory, and sandbox plus approvals give the workflow boundaries.
Quick action plan:
- Create an allowed browser-tabs list for local development.
- Enable OTel export at least for longer agent sessions.
- Choose one mode: strict sandbox plus network filtering, or allowNetwork for a local inner loop.
- Keep approvals for production, secrets, deploys, and writes outside the workspace.
- After every complex session, review not only the diff but also the trace.
If the team does these five things, the agent stops being a black box with terminal access and becomes a normal tool in the developer workflow.
Official sources:
- https://code.visualstudio.com/updates/v1_119
- https://code.visualstudio.com/docs/copilot/guides/browser-agent-testing-guide
- https://code.visualstudio.com/docs/copilot/guides/monitoring-agents
- https://code.visualstudio.com/docs/copilot/concepts/trust-and-safety#_agent-sandboxing
- https://code.visualstudio.com/docs/debugtest/integrated-browser
Quick checklist
- Define which browser tabs the agent may inspect for the task
- Enable OpenTelemetry for agent sessions if an OTel backend exists
- Choose the sandbox mode: strict allowlist or allowNetwork for local development
- Keep manual approval for secrets, production domains, and writes outside the workspace
- Review the trace or short tool-call log after the session
Prompt Pack: configure a safer AI-agent workflow in VS Code
You are the technical lead of a team that already uses AI agents in VS Code for web development.
Input data:
- project type: frontend / full-stack / backend;
- which pages or localhost URLs the agent may inspect;
- which domains the agent may call;
- whether the team has an OpenTelemetry collector;
- agent autonomy level: default approvals / allow session commands / autopilot.
Prepare a short operational playbook for VS Code 1.119:
1. which browser tabs may be shared with the agent, and which must not;
2. which OTel settings or environment variables to enable;
3. which sandbox/network control mode to choose;
4. which approval prompts must remain mandatory;
5. how the team reviews an agent session after it finishes.
Output format: decision table + 5-step checklist + "do not do this" section.