Copilot Studio: Credits, Approvals, and Real-Time Protection
First up, we cover how Microsoft has rolled out several noteworthy updates to Copilot Studio, its platform for building AI “agents” and workflows.
Copilot Studio Credits (formerly Message Packs)
Microsoft has renamed the Copilot Studio “message packs” to “Copilot Credits.” Previously, organisations had to buy packs of 25,000 “messages” to enable Copilot Studio usage. It’s essentially the same cost per pack for 25k interactions, but calling them credits is clearer – not every agent action was a single chat message, so the old terminology caused confusion.
More importantly, Microsoft clarified that if a user has a Microsoft 365 Copilot licence, many agent actions won’t count against these credits. Standard Q&A and generative replies handled by Copilot agents for licensed users are included at no extra charge. In practice, we understand you’ll still need at least one Copilot Studio credit pack in your tenant to get started, but you’re unlikely to burn through it unless you’re doing heavy automation or fully autonomous agents. It’s a welcome tweak for customers – you can experiment with building Copilot agents without immediately worrying about a big bill.
Read more about the Copilot Studio Credit model on Microsoft Learn.
AI Approvals (Preview)
“Human-in-the-loop” control is getting easier, and it is often a key governance control required to support the roll-out of autonomous agents. Microsoft has introduced AI Approvals in Copilot Studio, now in preview, which let you insert automated approval steps into your workflows. Essentially, an AI can make an approve/reject decision based on your criteria – for example, auto-approving expense reports under £500 while routing anything larger to a human for review.
Read more on Microsoft’s blog announcement on AI approvals in Copilot Studio
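To make the idea concrete, here’s a minimal sketch of the kind of rule an AI approval step might encode. This is plain Python for illustration only, not Copilot Studio’s actual configuration or API; the £500 threshold and field names simply follow the example above.

```python
# Conceptual sketch only: in Copilot Studio you configure this in the designer,
# not in code. The threshold and fields below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ExpenseReport:
    submitter: str
    amount_gbp: float
    category: str

def ai_approval_decision(report: ExpenseReport) -> str:
    """Mimic the kind of rule an AI approval step might apply."""
    if report.amount_gbp < 500:
        return "approved"            # low-value claims clear automatically
    return "escalate_to_human"       # everything else waits for a reviewer

print(ai_approval_decision(ExpenseReport("alex", 120.0, "travel")))    # approved
print(ai_approval_decision(ExpenseReport("sam", 1800.0, "hardware")))  # escalate_to_human
```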
Request for Information (Preview)
Alongside approvals, Microsoft added a new action type called Request for Information (RFI). This allows a Copilot Studio agent to pause and ask a human for input during an automated process. For instance, if a workflow hits a decision point that needs expert insight or missing data, it can send an email to a designated person (with a form for structured response) and wait for their reply before continuing. Unlike a simple yes/no approval, RFI can collect text, numbers, files, or other details from a person and feed that back into the automation.
Read more about the Request for Information in Copilot Studio
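As a rough illustration of the pattern, the sketch below uses a console prompt to stand in for the emailed form. The field names and workflow are made up, and none of this is Copilot Studio’s actual API – it just shows the “pause, ask a human, resume” shape of an RFI step.

```python
# Illustrative sketch of the "pause and ask a human" pattern behind RFI.
# In Copilot Studio the request goes out as an email with a structured form;
# here a console prompt stands in for that form (all names are assumptions).

def request_information(question: str, fields: list[str]) -> dict:
    """Pause the automation and collect structured input from a person."""
    print(f"Workflow paused: {question}")
    return {field: input(f"{field}: ") for field in fields}  # stand-in for the emailed form

def run_workflow(order: dict) -> None:
    if "budget_code" not in order:
        # Decision point needs data the agent doesn't have, so ask a human.
        extra = request_information(
            "Which budget code should this order be billed to?",
            fields=["budget_code", "notes"],
        )
        order.update(extra)  # feed the reply back into the automation
    print("Submitting order:", order)

run_workflow({"item": "laptop", "quantity": 3})
```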
Near Real-Time Protection for Agents
The third big update is all about security for AI agents. Now in public preview, Copilot Studio offers an advanced runtime protection plug-in that lets you connect an external security system (like Microsoft Defender for Cloud Apps) to monitor and control agent actions in near real time. In practice, this means that as an AI agent is about to execute an action (say, sending an email or running a script), it sends the details of its plan to your security system first for assessment. It’s basically a “bring your own security” model for Copilot: organisations can enforce their own compliance rules on AI behaviour. Microsoft has baked in default protections against prompt injection attacks, but as we’ve covered in previous episodes, these are not foolproof.
Read more on the Microsoft blog on near-real-time protection in Copilot Studio
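Conceptually, the hook works like a pre-execution check against an external policy service. Here’s a hedged sketch of that flow; the endpoint URL, payload shape and “allow/block” verdict are our assumptions for illustration, not Microsoft’s actual contract.

```python
# Minimal sketch of the pre-execution check that near-real-time protection enables.
# The endpoint, payload shape and verdict format are assumptions, not Microsoft's contract.
import json
import urllib.request

SECURITY_ENDPOINT = "https://security.contoso.example/assess"  # hypothetical reviewer service

def assess_planned_action(action: dict) -> bool:
    """Send the agent's planned action to an external security system and return its verdict."""
    request = urllib.request.Request(
        SECURITY_ENDPOINT,
        data=json.dumps(action).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        verdict = json.load(response)
    return verdict.get("decision") == "allow"

def execute_with_protection(action: dict) -> None:
    if assess_planned_action(action):
        print("Executing:", action["type"])  # e.g. actually send the email or run the script
    else:
        print("Blocked by security policy:", action["type"])

execute_with_protection({"type": "send_email", "to": "cfo@contoso.example", "body": "..."})
```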
Local AI: Run Your Own Copilot with Jan
We discuss Jan, an open-source tool that Paul has been using to test functionality like the Model Context Protocol (MCP) locally, before making API/MCP endpoints available for use by Copilot or Copilot Studio. Paul described it as “the open-source equivalent of Claude Desktop”: you can plug in an API key to use a cloud model, or run a local large language model. He previously spent a lot of time tinkering with another project (Ollama plus an MCP bridge) to have local models execute tools, which was pretty complicated.
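Jan can also expose a local, OpenAI-compatible API server, so you can point standard client libraries at it instead of a cloud endpoint. Here’s a minimal sketch of what that looks like; the port and model name are assumptions and will depend on how your Jan instance is configured.

```python
# A minimal sketch of talking to Jan's local OpenAI-compatible server.
# The port and model name are assumptions - check your Jan settings for the actual values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1337/v1",   # Jan's local API server (port may differ)
    api_key="not-needed-for-local",        # a local server typically ignores the key
)

response = client.chat.completions.create(
    model="llama3.2-3b-instruct",          # whichever model you've downloaded in Jan
    messages=[{"role": "user", "content": "Summarise the Model Context Protocol in one sentence."}],
)
print(response.choices[0].message.content)
```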
We talked about how Jan might tie into Microsoft’s ecosystem. Microsoft has a Copilot retrieval API on the horizon, and in theory a tool like Jan could connect to it as if it were just another “model”. That would let you test interactions with API endpoints locally, using instructions similar to those you’d give Copilot agents, before deploying anything to the cloud.
Read more in Paul’s article: Practical AI: Make Your LLM Local with Jan
Microsoft Changes Its Mind on Remote Work
The episode wraps up with a discussion about Microsoft’s internal policy shift on remote work – a move that generated a lot of chatter (and some eye-rolling). Back in the pandemic days, Microsoft was championing the virtues of remote and hybrid work, boasting about how productivity was up and employees could “work from anywhere.” Fast forward to now: Microsoft has announced a new “flexible work” policy that strongly encourages employees to be in the office at least three days per week. In practice, it’s a return-to-office mandate.
What really caught our attention, however, was how Microsoft handled this announcement. The company has apparently quietly scrubbed some of its old blog posts praising remote work. On the podcast, Paul and I both agree: it’s perfectly valid for Microsoft to change its stance based on new data or business priorities, but they should own that narrative, not erase it.
As observers (and as Microsoft MVPs ourselves), we’ll be watching how this policy evolves and how it might ripple into Microsoft’s products or customer guidance. For now, it’s a reminder that even tech giants are figuring out the balance between remote and office work, and the conversation is far from over.
That’s all for this week’s show! If you have thoughts on any of these topics, feel free to reach out or leave a comment. We’ll be back next time with more Microsoft 365 news and insights (and maybe a few conference updates as Ignite draws near).